Korean Experience and Achievement in Higher Education
ERIC Educational Resources Information Center
Lee, Jeong-Kyu
2001-01-01
The purpose of this paper is to introduce the transition of Korean education reform and to weigh Korean experience and achievement in contemporary higher education. The paper first illustrates a historical perspective on higher education in light of educational reform. Second, this study reviews the achievements of Korean higher education…
Using Records of Achievement in Higher Education.
ERIC Educational Resources Information Center
Assiter, Alison, Ed.; Shaw, Eileen, Ed.
This collection of 22 essays examines the use of records of achievement (student profiles or portfolios) in higher and vocational education in the United Kingdom. They include: (1) "Records of Achievement: Background, Definitions, and Uses" (Alison Assiter and Eileen Shaw); (2) "Profiling in Higher Education" (Alison Assiter and Angela Fenwick);…
Higher Education Is Key To Achieving MDGs
ERIC Educational Resources Information Center
Association of Universities and Colleges of Canada, 2004
2004-01-01
Imagine trying to achieve the Millennium Development Goals (MDGs) without higher education. As key institutions of civil society, universities are uniquely positioned between the communities they serve and the governments they advise. Through the CIDA-funded University Partnerships in Cooperation and Development program, Canadian universities have…
Higher Education Counts: Achieving Results. 2007 Report
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2007
2007-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's state system of higher education, as required under Connecticut General Statutes Section 10a-6a. The report contains accountability measures developed through the Performance Measures Task Force and approved by the Board of Governors for Higher Education. The measures…
Higher Education Counts: Achieving Results. 2009 Report
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2009
2009-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's state system of higher education, as required under Connecticut General Statutes Section 10a-6a. The report contains accountability measures developed through the Performance Measures Task Force and approved by the Board of Governors for Higher Education. The measures…
Higher Education Counts: Achieving Results. 2006 Report
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2006
2006-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's state system of higher education, as required under Connecticut General Statutes Section 10a-6a. The report contains accountability measures developed through the Performance Measures Task Force and approved by the Board of Governors for Higher Education. The measures…
Higher Education Counts: Achieving Results. 2008 Report
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2008
2008-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's state system of higher education, as required under Connecticut General Statutes Section 10a-6a. The report contains accountability measures developed through the Performance Measures Task Force and approved by the Board of Governors for Higher Education. The measures…
Higher Education Counts: Achieving Results, 2011. Report
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2011
2011-01-01
This report, issued by the Connecticut Department of Higher Education, reports on trends in higher education for the year 2011. Six goals are presented, each with at least two indicators. Each indicator is broken down into the following subsections: About This Indicator; Highlights; and In the Future. Most indicators also include statistical…
Achieving Quality Learning in Higher Education.
ERIC Educational Resources Information Center
Nightingale, Peggy; O'Neil, Mike
This volume on quality learning in higher education discusses issues of good practice particularly action learning and Total Quality Management (TQM)-type strategies and illustrates them with seven case studies in Australia and the United Kingdom. Chapter 1 discusses issues and problems in defining quality in higher education. Chapter 2 looks at…
Achievable Polarization for Heat-Bath Algorithmic Cooling.
Rodríguez-Briones, Nayeli Azucena; Laflamme, Raymond
2016-04-29
Pure quantum states play a central role in applications of quantum information, both as initial states for quantum algorithms and as resources for quantum error correction. Preparation of highly pure states that satisfy the threshold for quantum error correction remains a challenge, not only for ensemble implementations like NMR or ESR but also for other technologies. Heat-bath algorithmic cooling is a method to increase the purity of a set of qubits coupled to a bath. We investigated the achievable polarization by analyzing the limit when no more entropy can be extracted from the system. In particular, we give an analytic form for the maximum polarization achievable for the case when the initial state of the qubits is totally mixed, and the corresponding steady state of the whole system. It is, however, possible to reach higher polarization while starting with certain states; thus, our result provides an achievable bound. We also give the number of steps needed to get a specific required polarization. PMID:27176508
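The analytic result described above lends itself to a few lines of code. A minimal sketch, assuming the commonly quoted closed form for the asymptotic polarization of the target qubit, eps_max = tanh(2^(n-2) * artanh(eps_b)), for n qubits starting from the totally mixed state; the function name and the exact formula are this sketch's assumptions, not text from the paper:

```python
import math

def max_polarization(eps_bath: float, n_qubits: int) -> float:
    """Asymptotic target-qubit polarization for heat-bath algorithmic
    cooling, starting from the totally mixed state (assumed closed form:
    eps_max = ((1+e)^M - (1-e)^M) / ((1+e)^M + (1-e)^M), M = 2**(n-2))."""
    m = 2 ** (n_qubits - 2)
    # tanh form is algebraically identical and avoids overflow for large m:
    # ((1+e)^m - (1-e)^m) / ((1+e)^m + (1-e)^m) = tanh(m * atanh(e))
    return math.tanh(m * math.atanh(eps_bath))

# Low-polarization limit: eps_max ~ 2**(n-2) * eps_bath
print(max_polarization(1e-5, 3))   # ~2e-5 for n = 3
```

Note how the bound saturates: for large n or large bath polarization, tanh drives the result toward 1 (a pure state), consistent with the abstract's claim that the formula is an achievable bound rather than a universal maximum.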
Higher Education Counts: Achieving Results, 2008. Executive Summary
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2008
2008-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's system of higher education. Since 2000, the report has been the primary vehicle for reporting higher education's progress toward achieving six, statutorily-defined state goals: (1) To enhance student learning and promote academic excellence; (2) To join with elementary…
Higher Education Counts: Achieving Results. 2006 Executive Summary
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2006
2006-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's system of higher education. Since 2000, the report has been the principle vehicle for reporting higher education's progress toward achieving six, statutorily-defined state goals: (1) To enhance student learning and promote academic excellence; (2) To join with…
Higher Education Counts: Achieving Results. 2009 Executive Summary
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2009
2009-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's system of higher education. Since 2000, the report has been the primary vehicle for reporting higher education's progress toward achieving six, statutorily-defined state goals: (1) To enhance student learning and promote academic excellence; (2) To join with elementary…
Higher Education Counts: Achieving Results. 2007 Executive Summary
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2007
2007-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's system of higher education. Since 2000, the report has been the primary vehicle for reporting higher education's progress toward achieving six, statutorily-defined state goals: (1) To enhance student learning and promote academic excellence; (2) To join with elementary…
Achieving Equity in Higher Education: The Unfinished Agenda
ERIC Educational Resources Information Center
Astin, Alexander W.; Astin, Helen S.
2015-01-01
In this retrospective account of their scholarly work over the past 45 years, Alexander and Helen Astin show how the struggle to achieve greater equity in American higher education is intimately connected to issues of character development, leadership, civic responsibility, and spirituality. While shedding some light on a variety of questions…
Achieving Higher Energies via Passively Driven X-band Structures
NASA Astrophysics Data System (ADS)
Sipahi, Taylan; Sipahi, Nihan; Milton, Stephen; Biedron, Sandra
2014-03-01
Due to their higher intrinsic shunt impedance, X-band accelerating structures can achieve significant gradients with relatively modest input powers, and this can lead to more compact particle accelerators. At the Colorado State University Accelerator Laboratory (CSUAL) we would like to adapt this technology to our 1.3 GHz L-band accelerator system using a passively driven 11.7 GHz traveling wave X-band configuration that capitalizes on the high shunt impedances achievable in X-band accelerating structures in order to increase our overall beam energy in a manner that does not require investment in an expensive, custom, high-power X-band klystron system. Here we provide the design details of the X-band structures that will allow us to achieve our goal of reaching the maximum practical net potential across the X-band accelerating structure while driven solely by the beam from the L-band system.
Radiosity algorithms using higher order finite element methods
Troutman, R.; Max, N.
1993-08-01
Many of the current radiosity algorithms create a piecewise constant approximation to the actual radiosity. Through interpolation and extrapolation, a continuous solution is obtained. An accurate solution is found by increasing the number of patches which describe the scene. This has the effect of increasing the computation time as well as the memory requirements. By using techniques found in the finite element method, we can incorporate an interpolation function directly into our form factor computation. We can then use fewer elements to achieve a more accurate solution. Two algorithms, derived from the finite element method, are described and analyzed.
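The accuracy argument (higher-order elements need fewer patches for the same error) can already be seen in 1-D interpolation. A small illustrative sketch, not code from the paper; the test function and grids are this sketch's assumptions:

```python
import numpy as np

def interp_error(f, n_elems, order):
    """Max interpolation error of f on [0, 1] with n_elems elements.
    order 0: piecewise constant (midpoint value), as in classic radiosity;
    order 1: piecewise linear on nodal values, as in FEM-style radiosity."""
    xs = np.linspace(0.0, 1.0, 2001)           # dense evaluation grid
    edges = np.linspace(0.0, 1.0, n_elems + 1)
    if order == 0:
        mids = 0.5 * (edges[:-1] + edges[1:])
        idx = np.clip(np.searchsorted(edges, xs, side="right") - 1,
                      0, n_elems - 1)
        approx = f(mids)[idx]
    else:
        approx = np.interp(xs, edges, f(edges))
    return np.max(np.abs(f(xs) - approx))

f = lambda x: np.sin(np.pi * x)
e_const = interp_error(f, 32, order=0)   # 32 piecewise-constant elements
e_lin = interp_error(f, 8, order=1)      # only 8 piecewise-linear elements
print(e_const, e_lin)                    # linear wins with 4x fewer elements
```

Constant elements converge at first order in element size while linear elements converge at second order, which is why the linear approximation beats the constant one even with a quarter of the elements.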
Higher order nonlinear chirp scaling algorithm for medium Earth orbit synthetic aperture radar
NASA Astrophysics Data System (ADS)
Wang, Pengbo; Liu, Wei; Chen, Jie; Yang, Wei; Han, Yu
2015-01-01
Due to the larger orbital arc and longer synthetic aperture time in medium Earth orbit (MEO) synthetic aperture radar (SAR), it is difficult for conventional SAR imaging algorithms to achieve a good imaging result. An improved higher order nonlinear chirp scaling (NLCS) algorithm is presented for MEO SAR imaging. First, the point target spectrum of the modified equivalent squint range model-based signal is derived, where a concise expression is obtained by the method of series reversion. Second, the well-known NLCS algorithm is modified according to the new spectrum and an improved algorithm is developed. The range dependence of the two-dimensional point target reference spectrum is removed by improved CS processing, and accurate focusing is realized through range-matched filter and range-dependent azimuth-matched filter. Simulations are performed to validate the presented algorithm.
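The series-reversion step mentioned above can be made concrete. A minimal sketch of third-order reversion using the standard closed-form coefficients; the SAR-specific spectrum is omitted, and the coefficient values below are illustrative:

```python
def revert_series(a1, a2, a3):
    """Third-order series reversion: given y = a1*x + a2*x**2 + a3*x**3,
    return (A1, A2, A3) such that x = A1*y + A2*y**2 + A3*y**3 + O(y**4).
    These are the standard closed forms from series-reversion tables."""
    A1 = 1.0 / a1
    A2 = -a2 / a1**3
    A3 = (2.0 * a2**2 - a1 * a3) / a1**5
    return A1, A2, A3

# Check the reversion against the forward series for a small argument
a1, a2, a3 = 2.0, 0.3, -0.1
A1, A2, A3 = revert_series(a1, a2, a3)
x = 1e-3
y = a1 * x + a2 * x**2 + a3 * x**3
x_back = A1 * y + A2 * y**2 + A3 * y**3
print(abs(x_back - x))   # residual is O(x**4)
```

In the NLCS derivation the same tool is used to invert the range-azimuth phase history so that a concise point-target spectrum can be written down term by term.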
Algorithmic and Experimental Computation of Higher-Order Safe Primes
NASA Astrophysics Data System (ADS)
Díaz, R. Durán; Masqué, J. Muñoz
2008-09-01
This paper deals with a class of special primes called safe primes. In the usual definition, an odd prime p is safe if at least one of (p±1)/2 is prime. Safe primes have been recommended as factors of RSA moduli. In this paper, the concept of safe primes is extended to higher-order safe primes, and an explicit formula to compute the density of this class of primes in the set of the integers is supplied. Finally, explicit conditions are provided permitting the algorithmic computation of safe primes of arbitrary order. Some experimental results are provided as well.
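The order-k notion is easy to make concrete. A minimal sketch, assuming the chain definition that repeatedly takes (p-1)/2; the paper's definition also admits the (p+1)/2 branch, which this sketch omits:

```python
def is_prime(n: int) -> bool:
    """Deterministic trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def safe_order(p: int, max_order: int = 10) -> int:
    """Order of safety of prime p under the (p-1)/2 chain: the number of
    times (p-1)/2 can be taken while remaining prime. Order >= 1 is the
    classic 'safe prime'. Returns -1 if p itself is not prime."""
    if not is_prime(p):
        return -1
    order = 0
    while order < max_order:
        p = (p - 1) // 2
        if not is_prime(p):
            break
        order += 1
    return order

print(safe_order(23))    # 23 -> 11 -> 5 -> 2 are all prime: order 3
print(safe_order(13))    # (13 - 1) / 2 = 6 is not prime: order 0
```

Chains like 23, 11, 5, 2 are known as Cunningham chains; the density formula in the paper predicts how quickly such higher-order safe primes thin out among the integers.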
Elementary School Counselors and Teachers: Collaborators for Higher Student Achievement
ERIC Educational Resources Information Center
Sink, Christopher A.
2008-01-01
In this article I contend that elementary school teachers need to work more closely with school counselors to enhance student learning and academic performance and to narrow the achievement gap among student groups. Research showing the influence that counselors can exert on the educational process is summarized. Using the American School…
Charting the course for nurses' achievement of higher education levels.
Kovner, Christine T; Brewer, Carol; Katigbak, Carina; Djukic, Maja; Fatehi, Farida
2012-01-01
To improve patient outcomes and meet the challenges of the U.S. health care system, the Institute of Medicine recommends higher educational attainment for the nursing workforce. Characteristics of registered nurses (RNs) who pursue additional education are poorly understood, and this information is critical to planning long-term strategies for U.S. nursing education. To identify factors predicting enrollment and completion of an additional degree among those with an associate or bachelor's as their pre-RN licensure degree, we performed logistic regression analysis on data from an ongoing nationally representative panel study following the career trajectories of newly licensed RNs. For associate degree RNs, predictors of obtaining a bachelor's degree are the following: being Black, living in a rural area, nonnursing work experience, higher positive affectivity, higher work motivation, working in the intensive care unit, and working the day shift. For bachelor's RNs, predictors of completing a master's degree are the following: being Black, nonnursing work experience, holding more than one job, working the day shift, working voluntary overtime, lower intent to stay at current employer, and higher work motivation. Mobilizing the nurse workforce toward higher education requires integrated efforts from policy makers, philanthropists, employers, and educators to mitigate the barriers to continuing education. PMID:23158196
Strategies for Increasing Academic Achievement in Higher Education
ERIC Educational Resources Information Center
Ensign, Julene; Woods, Amelia Mays
2014-01-01
Higher education today faces unique challenges. Decreasing student engagement, increasing diversity, and limited resources all contribute to the issues being faced by students, educators, and administrators alike. The unique characteristics and expectations that students bring to their professional programs require new methods of addressing…
A new adaptive GMRES algorithm for achieving high accuracy
Sosonkina, M.; Watson, L.T.; Kapania, R.K.; Walker, H.F.
1996-12-31
GMRES(k) is widely used for solving nonsymmetric linear systems. However, it is inadequate either when it converges only for k close to the problem size or when numerical error in the modified Gram-Schmidt process used in the GMRES orthogonalization phase dramatically affects the algorithm's performance. An adaptive version of GMRES(k), which tunes the restart value k based on criteria estimating the GMRES convergence rate for the given problem, is proposed here. The essence of the adaptive GMRES strategy is to adapt the parameter k to the problem, similar in spirit to how a variable-order ODE algorithm tunes the order k. With FORTRAN 90, which provides pointers and dynamic memory management, dealing with the variable storage requirements implied by varying k is not too difficult. The parameter k can be both increased and decreased; an increase-only strategy is described next, followed by pseudocode.
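The adaptive-restart idea can be sketched in a few dozen lines. The following toy NumPy implementation is not the paper's FORTRAN 90 algorithm: it pairs a bare-bones GMRES(k) cycle with a restart loop that doubles k whenever the per-cycle residual reduction stalls (the 0.9 stall threshold and the doubling rule are this sketch's assumptions):

```python
import numpy as np

def gmres_cycle(A, b, x0, k):
    """One GMRES(k) cycle: Arnoldi with modified Gram-Schmidt, then a
    small least-squares solve. Returns the updated x and its residual."""
    n = b.size
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta == 0.0:
        return x0, 0.0
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = r0 / beta
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] > 1e-14:
            Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    x = x0 + Q[:, :k] @ y
    return x, np.linalg.norm(b - A @ x)

def adaptive_gmres(A, b, k=5, k_max=30, tol=1e-10, max_cycles=200):
    """Restarted GMRES that grows the restart value k when the residual
    reduction per cycle stalls (a simple stand-in for the paper's
    convergence-rate criteria)."""
    x = np.zeros_like(b)
    res_old = np.linalg.norm(b)
    for _ in range(max_cycles):
        x, res = gmres_cycle(A, b, x, k)
        if res < tol:
            break
        if res > 0.9 * res_old and k < k_max:  # stalling: enlarge k
            k = min(2 * k, k_max)
        res_old = res
    return x, res

# Usage on a small, well-conditioned nonsymmetric system
rng = np.random.default_rng(0)
n = 40
A = 2.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
x, res = adaptive_gmres(A, b)
print(res)   # converges well below the 1e-10 tolerance
```

In Python the variable storage implied by a changing k is handled automatically by reallocating `Q` and `H` each cycle, which plays the role of the FORTRAN 90 pointer and dynamic-memory machinery mentioned above.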
Fuzzy Pool Balance: An algorithm to achieve a two-dimensional balance in distributed storage systems
NASA Astrophysics Data System (ADS)
Wu, Wenjing; Chen, Gang
2014-06-01
The limitation of scheduling modules and the gradual addition of disk pools in distributed storage systems often result in imbalances among their disk pools in terms of both disk usage and file count. This can cause various problems to the storage system such as single point of failure, low system throughput and imbalanced resource utilization and system loads. An algorithm named Fuzzy Pool Balance (FPB) is proposed here to solve this problem. The input of FPB is the current file distribution among disk pools and the output is a file migration plan indicating what files are to be migrated to which pools. FPB uses an array to classify the files by their sizes. The file classification array is dynamically calculated with a defined threshold named Tmax that defines the allowed pool disk usage deviations. File classification is the basis of file migration. FPB also defines the Immigration Pool (IP) and Emigration Pool (EP) according to the pool disk usage and File Quantity Ratio (FQR) that indicates the percentage of each category of files in each disk pool, so files with higher FQR in an EP will be migrated to IP(s) with a lower FQR of this file category. To verify this algorithm, we implemented FPB on an ATLAS Tier2 dCache production system. The results show that FPB can achieve a very good balance in both free space and file counts, and adjusting the threshold value Tmax and the correction factor to the average FQR can achieve a tradeoff between free space and file count.
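The emigration-pool/immigration-pool logic can be illustrated with a greatly simplified sketch. The greedy planner below balances only disk usage; the pool names, the move-acceptance rule, and the default threshold are this sketch's assumptions, and the real FPB additionally classifies files by size and balances file counts via the FQR:

```python
def balance_plan(pools, t_max=0.05):
    """Greedy sketch of an FPB-style migration plan: move files from the
    pool whose usage most exceeds the mean (emigration pool) to the
    least-used pool (immigration pool). `pools` maps pool name -> list
    of file sizes; returns a list of (file_size, src, dst) moves."""
    total = sum(sum(fs) for fs in pools.values())
    mean = total / len(pools)
    usage = {p: sum(fs) for p, fs in pools.items()}
    files = {p: sorted(fs, reverse=True) for p, fs in pools.items()}
    plan = []
    changed = True
    while changed:
        changed = False
        src = max(usage, key=usage.get)
        dst = min(usage, key=usage.get)
        if usage[src] - mean <= t_max * mean or not files[src]:
            break
        for size in files[src]:
            # only move a file if it brings both pools closer to the mean
            if usage[dst] + size <= mean and usage[src] - size >= usage[dst]:
                files[src].remove(size)
                files[dst].append(size)
                usage[src] -= size
                usage[dst] += size
                plan.append((size, src, dst))
                changed = True
                break
    return plan

pools = {"a": [30, 25, 20, 15, 10], "b": [10], "c": [10]}
plan = balance_plan(pools)
print(plan)   # -> [(30, 'a', 'b'), (25, 'a', 'c')]
```

Starting from usages (100, 10, 10) against a mean of 40, two moves bring the pools to (45, 40, 35), within the allowed deviation; the full algorithm would additionally require the migrated files' size categories to even out the per-category FQR.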
DeLay, Dawn; Laursen, Brett; Kiuru, Noona; Poikkeus, Anna-Maija; Aunola, Kaisa; Nurmi, Jari-Erik
2015-11-01
This study was designed to investigate friend influence over mathematical reasoning in a sample of 374 children in 187 same-sex friend dyads (184 girls in 92 friendships; 190 boys in 95 friendships). Participants completed surveys that measured mathematical reasoning in the 3rd grade (approximately 9 years old) and 1 year later in the 4th grade (approximately 10 years old). Analyses designed for dyadic data (i.e., longitudinal actor-partner interdependence model) indicated that higher achieving friends influenced the mathematical reasoning of lower achieving friends, but not the reverse. Specifically, greater initial levels of mathematical reasoning among higher achieving partners in the 3rd grade predicted greater increases in mathematical reasoning from 3rd grade to 4th grade among lower achieving partners. These effects held after controlling for peer acceptance and rejection, task avoidance, interest in mathematics, maternal support for homework, parental education, length of the friendship, and friendship group norms on mathematical reasoning. PMID:26402901
ERIC Educational Resources Information Center
Kaminskiene, Lina; Stasiunaitiene, Egle
2013-01-01
The article identifies the validity of assessment of non-formal and informal learning achievements (NILA) as one of the key factors for encouraging further development of the process of assessing and recognising non-formal and informal learning achievements in higher education. The authors analyse why the recognition of non-formal and informal…
ERIC Educational Resources Information Center
Arredondo, Patricia; Castillo, Linda G.
2011-01-01
Latina/o student achievement is a priority for the American Association of Hispanics in Higher Education (AAHHE). To date, AAHHE has worked deliberately on this agenda. However, well-established higher education associations such as the Association of American Universities (AAU) and the Association of Public and Land-grant Universities (APLU) are…
Relationship between Study Habits and Academic Achievement of Higher Secondary School Students
ERIC Educational Resources Information Center
Lawrence, A. S. Arul
2014-01-01
The present study was probed to find the significant relationship between study habits and academic achievement of higher secondary school students with reference to the background variables. Survey method was employed. Data for the study were collected from 300 students in 13 higher secondary schools using Study Habits Inventory by V.G. Anantha…
A general higher-order remap algorithm for ALE calculations
Chiravalle, Vincent P
2011-01-05
A numerical technique for solving the equations of fluid dynamics with arbitrary mesh motion is presented. The three phases of the Arbitrary Lagrangian Eulerian (ALE) methodology are outlined: the Lagrangian phase, grid relaxation phase and remap phase. The Lagrangian phase follows a well known approach from the HEMP code; in addition the strain rate and flow divergence are calculated in a consistent manner according to Margolin. A donor cell method from the SALE code forms the basis of the remap step, but unlike SALE a higher order correction based on monotone gradients is also added to the remap. Four test problems were explored to evaluate the fidelity of these numerical techniques, as implemented in a simple test code, written in the C programming language, called Cercion. Novel cell-centered data structures are used in Cercion to reduce the complexity of the programming and maximize the efficiency of memory usage. The locations of the shock and contact discontinuity in the Riemann shock tube problem are well captured. Cercion demonstrates a high degree of symmetry when calculating the Sedov blast wave solution, with a peak density at the shock front that is similar to the value determined by the RAGE code. For a flyer plate test problem both Cercion and FLAG give virtually the same velocity temporal profile at the target-vacuum interface. When calculating a cylindrical implosion of a steel shell, Cercion and FLAG agree well and the Cercion results are insensitive to the use of ALE.
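The donor-cell remap with a monotone higher-order correction can be illustrated in one dimension. A minimal sketch assuming a periodic grid and a minmod limiter; Cercion's actual limiter, data structures, and multi-dimensional remap are not specified in this summary:

```python
import numpy as np

def minmod(a, b):
    """Monotone slope limiter: zero at extrema, else the smaller slope."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def remap_1d(u, c):
    """One donor-cell remap step on a periodic 1-D grid with a
    higher-order, minmod-limited gradient correction (in the spirit of
    the SALE donor-cell scheme plus monotone gradients). The parameter c
    is the fractional cell displacement, 0 <= c <= 1."""
    slopes = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    # flux leaving each cell through its right face (donor side),
    # corrected by the limited in-cell gradient
    flux = c * (u + 0.5 * (1.0 - c) * slopes)
    return u - flux + np.roll(flux, 1)

u = np.zeros(50)
u[10:20] = 1.0                    # square pulse
total0 = u.sum()
for _ in range(100):
    u = remap_1d(u, 0.4)
print(abs(u.sum() - total0))      # conservative to roundoff
print(u.min(), u.max())           # monotone: stays within [0, 1]
```

Because the update is written as a difference of face fluxes, conservation holds by telescoping, and the minmod limiter keeps the remapped profile free of the new extrema an unlimited higher-order correction would introduce.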
ERIC Educational Resources Information Center
Schmid, Richard F.; Bernard, Robert M.; Borokhovski, Eugene; Tamim, Rana; Abrami, Philip C.; Wade, C. Anne; Surkes, Michael A.; Lowerison, Gretchen
2009-01-01
This paper reports the findings of a Stage I meta-analysis exploring the achievement effects of computer-based technology use in higher education classrooms (non-distance education). An extensive literature search revealed more than 6,000 potentially relevant primary empirical studies. Analysis of a representative sample of 231 studies (k = 310)…
Leveraging Quality Improvement to Achieve Student Learning Assessment Success in Higher Education
ERIC Educational Resources Information Center
Glenn, Nancy Gentry
2009-01-01
Mounting pressure for transformational change in higher education driven by technology, globalization, competition, funding shortages, and increased emphasis on accountability necessitates that universities implement reforms to demonstrate responsiveness to all stakeholders and to provide evidence of student achievement. In the face of the demand…
An Exploratory Study of the Achievement of the Twenty-First Century Skills in Higher Education
ERIC Educational Resources Information Center
Ghaith, Ghazi
2010-01-01
Purpose: The purpose of this paper is to present the results of a survey study of the achievement of twenty-first century skills in higher education. Design/methodology/approach: The study employs a quantitative survey design. Findings: The findings indicate that the basic scientific and technological skills of reading critically and writing…
Achieving Higher Levels of Success for A.D.H.D. Students Working in Collaborative Groups
ERIC Educational Resources Information Center
Simplicio, Joseph S. C.
2007-01-01
This article explores a new and innovative strategy for helping students with Attention Deficit Hyperactivity Disorder (A.D.H.D.) achieve higher levels of academic success when working in collaborative groups. Since the research indicates that students with this disorder often have difficulty in maintaining their concentration this strategy is…
ERIC Educational Resources Information Center
Magen-Nagar, Noga
2016-01-01
The purpose of the current study is to explore the effects of learning strategies on Mathematical Literacy (ML) of students in higher and lower achieving countries. To address this issue, the study utilizes PISA2002 data to conduct a multi-level analysis (HLM) of Hong Kong and Israel students. In PISA2002, Israel was rated 31st in Mathematics,…
An Analysis of Factors Influencing the Achievement of Higher Education by Chief Fire Officers
ERIC Educational Resources Information Center
Ditch, Robert L.
2012-01-01
The leadership of the United States Fire Service (FS) believes that higher education increases the professionalism of FS members. The research problem at the research site, which is a multisite fire department located in southeastern United States, was the lack of research-based findings on the factors influencing the achievement of higher…
Fast algorithm for scaling analysis with higher-order detrending moving average method
NASA Astrophysics Data System (ADS)
Tsujimoto, Yutaka; Miki, Yuki; Shimatani, Satoshi; Kiyono, Ken
2016-05-01
Among scaling analysis methods based on the root-mean-square deviation from the estimated trend, it has been demonstrated that centered detrending moving average (DMA) analysis with a simple moving average has good performance when characterizing long-range correlation or fractal scaling behavior. Furthermore, higher-order DMA has also been proposed; it is shown to have better detrending capabilities, removing higher-order polynomial trends than original DMA. However, a straightforward implementation of higher-order DMA requires a very high computational cost, which would prevent practical use of this method. To solve this issue, in this study, we introduce a fast algorithm for higher-order DMA, which consists of two techniques: (1) parallel translation of moving averaging windows by a fixed interval; (2) recurrence formulas for the calculation of summations. Our algorithm can significantly reduce computational cost. Monte Carlo experiments show that the computational time of our algorithm is approximately proportional to the data length, although that of the conventional algorithm is proportional to the square of the data length. The efficiency of our algorithm is also shown by a systematic study of the performance of higher-order DMA, such as the range of detectable scaling exponents and detrending capability for removing polynomial trends. In addition, through the analysis of heart-rate variability time series, we discuss possible applications of higher-order DMA.
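The cumulative-sum trick behind the speedup can be shown for the zeroth-order (simple moving average) case. A minimal sketch; the paper's contribution extends this recurrence idea to higher-order DMA, which this sketch does not implement:

```python
import numpy as np

def dma_fluctuation(x, scales):
    """Zeroth-order centered detrending moving average (DMA) analysis.
    Uses cumulative sums so each moving average costs O(N), the same
    kind of recurrence that underlies the paper's fast algorithm.
    Returns F(s) for each odd window size s in `scales`."""
    y = np.cumsum(x - np.mean(x))                # profile of the series
    csum = np.concatenate(([0.0], np.cumsum(y)))
    out = []
    for s in scales:
        h = s // 2                               # s assumed odd
        # centered moving average obtained from the cumulative sum
        ma = (csum[s:] - csum[:-s]) / s
        dev = y[h:len(y) - h] - ma
        out.append(np.sqrt(np.mean(dev**2)))
    return np.array(out)

rng = np.random.default_rng(1)
x = rng.standard_normal(2**14)                   # white noise test signal
scales = np.array([5, 9, 17, 33, 65, 129])
F = dma_fluctuation(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(round(alpha, 2))                           # ~0.5 for white noise
```

The slope of log F(s) versus log s estimates the scaling exponent; for uncorrelated noise it should sit near 0.5, and higher-order DMA widens the range of trends that can be removed before this fit is attempted.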
Higher-Order, Space-Time Adaptive Finite Volume Methods: Algorithms, Analysis and Applications
Minion, Michael
2014-04-29
The four main goals outlined in the proposal for this project were: 1. Investigate the use of higher-order (in space and time) finite-volume methods for fluid flow problems. 2. Explore the embedding of iterative temporal methods within traditional block-structured AMR algorithms. 3. Develop parallel in time methods for ODEs and PDEs. 4. Work collaboratively with the Center for Computational Sciences and Engineering (CCSE) at Lawrence Berkeley National Lab towards incorporating new algorithms within existing DOE application codes.
ERIC Educational Resources Information Center
Jacobs, Nicky; Harvey, David
2005-01-01
Differences in family factors in determining academic achievement were investigated by testing 432 parents in nine independent, coeducational Melbourne schools. Schools were ranked and categorized into three groups (high, medium and low), based on student achievement (ENTER) scores in their final year of secondary school and school improvement…
Liu, Yinxiao; Jin, Dakai; Saha, Punam K.
2015-01-01
Adult bone diseases, especially osteoporosis, lead to increased risk of fracture associated with substantial morbidity, mortality, and financial costs. Clinically, osteoporosis is defined by low bone mineral density (BMD); however, increasing evidence suggests that the micro-architectural quality of trabecular bone (TB) is an important determinant of bone strength and fracture risk. Accurate measurement of trabecular thickness and marrow spacing is of significant interest for early diagnosis of osteoporosis or treatment effects. Here, we present a new robust algorithm for computing TB thickness and marrow spacing at a low resolution achievable in vivo. The method uses a star-line tracing technique that effectively deals with partial voluming effects of in vivo imaging where voxel size is comparable to TB thickness. Experimental results on cadaveric ankle specimens have demonstrated the algorithm’s robustness (ICC>0.98) under repeat scans of multi-row detector computed tomography (MD-CT) imaging. It has been observed in experimental results that TB thickness and marrow spacing measures as computed by the new algorithm have strong association (R2 ∈{0.85, 0.87}) with TB’s experimental mechanical strength measures. PMID:27330678
Leveraging People-Related Maturity Issues for Achieving Higher Maturity and Capability Levels
NASA Astrophysics Data System (ADS)
Buglione, Luigi
During the past 20 years, Maturity Models (MM) have become a buzzword in the ICT world. Since Crosby's initial idea in 1979, plenty of models have been created in the Software & Systems Engineering domains, addressing various perspectives. By analyzing the content of the Process Reference Models (PRM) in many of them, it can be noticed that people-related issues carry little weight in appraisals of the capabilities of organizations, while in practice they are considered significant contributors in traditional process and organizational performance appraisals, as stressed in well-known Performance Management models such as MBQA, EFQM and BSC. This paper proposes some ways to leverage people-related maturity issues, merging HR practices from several types of maturity models into the organizational Business Process Model (BPM) in order to achieve higher organizational maturity and capability levels.
Han, Qi-Gang; Yang, Wen-Ke; Zhu, Pin-Wen; Ban, Qing-Chu; Yan, Ni; Zhang, Qiang
2013-07-01
In order to increase the maximum cell pressure of the cubic high-pressure apparatus, we have developed a new structure of tungsten carbide cubic anvil (tapered cubic anvil), based on the principles of massive support and lateral support. Our results indicated that the tapered cubic anvil has several advantages. First, the tapered cubic anvil can push the pressure transfer rate above 36.37%, compared to the conventional anvil. Second, the rate of failure cracking decreases by about 11.20% after the modification of the conventional anvil. Third, the limit of static high pressure in the sample cell can be extended to 13 GPa, increasing the maximum cell pressure by about 73.3% over that of the conventional anvil. Fourth, the volume of the sample cell compressed by tapered cubic anvils can reach 14.13 mm(3) (3 mm diameter × 2 mm long), which is three and six orders of magnitude larger than that of the double-stage apparatus and the diamond anvil cell, respectively. This work represents a relatively simple method for achieving higher pressures and larger sample cells. PMID:23902079
Pyramiding B genes in cotton achieves broader but not always higher resistance to bacterial blight.
Essenberg, Margaret; Bayles, Melanie B; Pierce, Margaret L; Verhalen, Laval M
2014-10-01
Near-isogenic lines of upland cotton (Gossypium hirsutum) carrying single, race-specific genes B4, BIn, and b7 for resistance to bacterial blight were used to develop a pyramid of lines with all possible combinations of two and three genes to learn whether the pyramid could achieve broad and high resistance approaching that of L. A. Brinkerhoff's exceptional line Im216. Isogenic strains of Xanthomonas axonopodis pv. malvacearum carrying single avirulence (avr) genes were used to identify plants carrying specific resistance (B) genes. Under field conditions in north-central Oklahoma, pyramid lines exhibited broader resistance to individual races and, consequently, higher resistance to a race mixture. It was predicted that lines carrying two or three B genes would also exhibit higher resistance to race 1, which possesses many avr genes. Although some enhancements were observed, they did not approach the level of resistance of Im216. In a growth chamber, bacterial populations attained by race 1 in and on leaves of the pyramid lines decreased significantly with increasing number of B genes in only one of four experiments. The older lines, Im216 and AcHR, exhibited considerably lower bacterial populations than any of the one-, two-, or three-B-gene lines. A spreading collapse of spray-inoculated AcBIn and AcBInb7 leaves appears to be a defense response (conditioned by BIn) that is out of control. PMID:24655289
Effects of Traditional, Blended and E-Learning on Students' Achievement in Higher Education
ERIC Educational Resources Information Center
Al-Qahtani, Awadh A. Y.; Higgins, S. E.
2013-01-01
The study investigates the effect of e-learning, blended learning and classroom learning on students' achievement. Two experimental groups together with a control group from Umm Al-Qura University in Saudi Arabia were identified randomly. To assess students' achievement in the different groups, pre- and post-achievement tests were used. The…
Harmon, Tyler S; Crabtree, Michael D; Shammas, Sarah L; Posey, Ammon E; Clarke, Jane; Pappu, Rohit V
2016-09-01
Many intrinsically disordered proteins (IDPs) participate in coupled folding and binding reactions and form alpha helical structures in their bound complexes. Alanine, glycine, or proline scanning mutagenesis approaches are often used to dissect the contributions of intrinsic helicities to coupled folding and binding. These experiments can yield confounding results because the mutagenesis strategy changes the amino acid compositions of IDPs. Therefore, an important next step in mutagenesis-based approaches to mechanistic studies of coupled folding and binding is the design of sequences that satisfy three major constraints. These are (i) achieving a target intrinsic alpha helicity profile; (ii) fixing the positions of residues corresponding to the binding interface; and (iii) maintaining the native amino acid composition. Here, we report the development of a Genetic Algorithm for Design of Intrinsic secondary Structure (GADIS) for designing sequences that satisfy the specified constraints. We describe the algorithm and present results to demonstrate the applicability of GADIS by designing sequence variants of the intrinsically disordered PUMA system that undergoes coupled folding and binding to Mcl-1. Our sequence designs span a range of intrinsic helicity profiles. The predicted variations in sequence-encoded mean helicities are tested against experimental measurements. PMID:27503953
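The three design constraints above (target helicity profile, fixed interface positions, fixed composition) can be illustrated with a toy genetic algorithm. The propensity table and sliding-window scoring below are hypothetical stand-ins for GADIS's simulation-based helicity estimates; only the constraint-handling logic (swap-only mutation, frozen interface indices) reflects the stated design problem.

```python
import random

# Toy per-residue helix propensities (illustrative values only;
# GADIS itself estimates helicity from physics-based simulations).
PROPENSITY = {"A": 1.4, "L": 1.2, "E": 1.4, "K": 1.2, "G": 0.6,
              "P": 0.3, "S": 0.8, "D": 1.0, "Q": 1.1, "V": 0.9}

def helicity_profile(seq, window=5):
    """Crude sliding-window estimate of per-residue helicity."""
    prof = []
    for i in range(len(seq)):
        lo, hi = max(0, i - window // 2), min(len(seq), i + window // 2 + 1)
        prof.append(sum(PROPENSITY[a] for a in seq[lo:hi]) / (hi - lo))
    return prof

def fitness(seq, target):
    """Negative squared deviation from the target helicity profile."""
    return -sum((p - t) ** 2 for p, t in zip(helicity_profile(seq), target))

def mutate(seq, fixed):
    """Swap two non-interface residues: composition is preserved
    exactly (constraint iii) and interface positions never move (ii)."""
    free = [i for i in range(len(seq)) if i not in fixed]
    i, j = random.sample(free, 2)
    s = list(seq)
    s[i], s[j] = s[j], s[i]
    return "".join(s)

def evolve(seq, target, fixed, generations=50, pop_size=8):
    """Elitist genetic search toward the target helicity profile (i)."""
    pop = [seq]
    for _ in range(generations):
        children = [mutate(p, fixed) for p in pop for _ in range(4)]
        pop = sorted(pop + children, key=lambda s: fitness(s, target),
                     reverse=True)[:pop_size]
    return pop[0]
```

Because mutation is restricted to swaps of free positions, any evolved sequence is guaranteed to be a permutation of the input with the interface residues untouched.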
ERIC Educational Resources Information Center
Rouse, Martyn; Florian, Lani
2006-01-01
This paper reports on a multi-method study that examined the effects of including higher and lower proportions of students designated as having special educational needs on student achievement in secondary schools. It explores some of the issues involved in conducting such research and considers the extent to which newly available national data in…
ERIC Educational Resources Information Center
Borman, Geoffrey D.; Kimball, Steven M.
2005-01-01
Using standards-based evaluation ratings for nearly 400 teachers, and achievement results for over 7,000 students from grades 4-6, this study investigated the distribution and achievement effects of teacher quality in Washoe County, a mid-sized school district serving Reno and Sparks, Nevada. Classrooms with higher concentrations of minority,…
Comparison of Five System Identification Algorithms for Rotorcraft Higher Harmonic Control
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
1998-01-01
This report presents an analysis and performance comparison of five system identification algorithms. The methods are presented in the context of identifying a frequency-domain transfer matrix for the higher harmonic control (HHC) of helicopter vibration. The five system identification algorithms include three previously proposed methods: (1) the weighted-least-squares-error approach (in moving-block format), (2) the Kalman filter method, and (3) the least-mean-squares (LMS) filter method. In addition there are two new ones: (4) a generalized Kalman filter method and (5) a generalized LMS filter method. The generalized Kalman filter method and the generalized LMS filter method were derived as extensions of the classic methods to permit identification by using more than one measurement per identification cycle. Simulation results are presented for conditions ranging from the ideal case of a stationary transfer matrix and no measurement noise to the more complex cases involving both measurement noise and transfer-matrix variation. Both open-loop identification and closed-loop identification were simulated. Closed-loop identification was more challenging than open-loop identification because of the decreasing signal-to-noise ratio as the vibration became reduced. The closed-loop simulation considered both local-model identification, with measured vibration feedback, and global-model identification, with feedback of the identified uncontrolled vibration. The algorithms were evaluated in terms of their accuracy, stability, convergence properties, computation speeds, and relative ease of implementation.
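The LMS filter method named above can be sketched for a simple single-channel case. This is the generic textbook LMS identifier, not the report's HHC implementation (which identifies a frequency-domain transfer matrix from multiple measurements); the function name and test system are illustrative.

```python
import numpy as np

def lms_identify(x, d, n_taps, mu=0.05):
    """Identify an unknown FIR system from input x and measured output d
    with the least-mean-squares (LMS) adaptive filter:
        e[k] = d[k] - w . x_vec,   w <- w + mu * e[k] * x_vec
    where x_vec holds the n_taps most recent input samples."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        x_vec = x[k - n_taps + 1:k + 1][::-1]  # x[k], x[k-1], ..., newest first
        e = d[k] - w @ x_vec                   # instantaneous prediction error
        w += mu * e * x_vec                    # stochastic-gradient update
    return w
```

With a noise-free measured output the weight vector converges to the true system coefficients; the report's closed-loop observation (identification degrades as the signal-to-noise ratio falls) corresponds to adding measurement noise to d.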
What Is the Best Way to Achieve Broader Reach of Improved Practices in Higher Education?
ERIC Educational Resources Information Center
Kezar, Adrianna
2011-01-01
This article examines a common problem in higher education--how to create more widespread use of improved practices, often commonly referred to as innovations. I argue that policy models of scale-up are often advocated in higher education but that they have a dubious history in community development and K-12 education and that higher education…
ERIC Educational Resources Information Center
Catalano, D. Chase J.
2015-01-01
Trans* men have not, as yet, received specific research attention in higher education. Based on intensive interviews with 25 trans* men enrolled in colleges or universities in New England, I explore their experiences in higher education. I analyze participants' descriptions of supports and challenges in their collegiate environments, as well as…
Using the Internet To Deliver Higher Education: A Cautionary Tale about Achieving Good Practice.
ERIC Educational Resources Information Center
Coombs, Steven J.; Rodd, Jillian
2001-01-01
Reviews the development and delivery of a higher education course module that was designed to provide remote learners in England with computer-supported solutions to access higher education as part of a technology-assisted distance education program. Highlights include use of a Web site; e-mail; videoconferencing; and student attrition rate.…
Higher Education and the Achievement (and/or Prevention) of Equity and Social Justice
ERIC Educational Resources Information Center
Brennan, John; Naidoo, Rajani
2008-01-01
The article examines the theoretical and empirical literature on higher education's role in relation to social equity and related notions of citizenship, social justice, social cohesion and meritocracy. It considers both the education and the research functions of higher education and how these impact upon different sections of society, on who…
ERIC Educational Resources Information Center
Murphy, David; Williams, Jeff
1997-01-01
Describes four successful cost-containment initiatives of the Midwestern Higher Education Commission, which was established to advance higher education in the Midwest through interstate cooperation. Projects include development of Academic Scheduling and Management Software; Internet-based activities; the Virtual Private Network, to reduce…
Colonialism on Campus: A Critique of Mentoring to Achieve Equity in Higher Education.
ERIC Educational Resources Information Center
Collins, Roger L.
In order to reconceptualize the mentoring relationship in higher education, parallels to colonialist strategies of subordination are drawn. The objective is to stimulate renewed thinking and action more consistent with stated policy goals in higher education. One of the primary functions of a mentor or sponsor is to exercise personal power to…
The Effects of Higher Education/Military Service on Achievement Levels of Police Academy Cadets.
ERIC Educational Resources Information Center
Johnson, Thomas Allen
This study compared levels of achievement of three groups of Houston (Texas) police academy cadets: those with no military service but with 60 or more college credit hours, those with military service and 0 hours of college credit, and those with military service and 1 to 59 hours of college credit. Prior to 1991, police cadets in Houston were…
ERIC Educational Resources Information Center
Dupont, Serge; Meert, Gaëlle; Galand, Benoît; Nils, Frédéric
2013-01-01
Research on academic achievement at a university has mainly focused on success and persistence among first year students. Very few studies have looked at delay or failure in the completion of a final dissertation. However, this phenomenon could affect a substantial proportion of students and has considerable costs. The purpose of the present study…
Gender Segregation in Higher Education: Effects of Aspirations, Mathematics Achievement, and Income.
ERIC Educational Resources Information Center
Wilson, Kenneth L.; Boldizar, Janet P.
1990-01-01
Analyzes the relationships among mathematics achievement levels, income potential, high school aspirations, and the gender segregation of bachelor's degrees. Investigates how gender segregation changed between 1973 and 1983. Concludes that gender segregation is present at the high school and bachelor's levels. Maintains that psychological barriers…
ERIC Educational Resources Information Center
Mc Beth, Maureen
2010-01-01
This study provides important insights into the relationship between the epistemological beliefs of community college students, the selection of learning strategies, and academic achievement. This study employed a quantitative survey design. Data were collected by surveying students at a community college during the spring semester of 2010. The…
Success in Higher Education: The Challenge to Achieve Academic Standing and Social Position
ERIC Educational Resources Information Center
Life, James
2015-01-01
When students look at their classmates in the classroom, consciously or unconsciously, they see competitors both for academic recognition and social success. How do they fit in relation to others and how do they succeed in achieving both? Traditional views on the drive to succeed and the fear of failure are well known as motivators for achieving…
ERIC Educational Resources Information Center
Parisi, Joe
2012-01-01
This paper explores several research questions that identify differences between conditionally admitted students and regularly admitted students in terms of achievement results at one institution. The research provides specific variables as well as relationships including historical and comparative aggregate data from 2009 and 2010 that indicate…
The Little District that Could: Literacy Reform Leads to Higher Achievement in California District
ERIC Educational Resources Information Center
Kelly, Patricia R.; Budicin-Senters, Antoinette; King, L. McLean
2005-01-01
This article describes educational reform developed over a 10-year period in California's Lemon Grove School District, which resulted in a steady and remarkable upward shift in achievement for the students of this multicultural district just outside San Diego. Six elements of literacy reform emerged as the most significant factors affecting…
ERIC Educational Resources Information Center
Usun, Salih
2004-01-01
The main aim of this study was to determine the opinions of the undergraduate students and faculty members on factors that affect student learning and academic achievement. The sub aims of this study were to: (1) Develop a mean rank ordering of the 23 dimensions affecting learning, for both the students and faculty, and determine the similarities…
ERIC Educational Resources Information Center
Eshetu, Amogne Asfaw
2015-01-01
Gender is among the determinant factors affecting students' academic achievement. This paper tried to investigate the impact of gender on academic performance of preparatory secondary school students based on 2014 EHEECE result. Ex post facto research design was used. To that end, data were collected from 3243 students from eight purposively…
ERIC Educational Resources Information Center
Myers, Carrie B.; Brown, Doreen E.; Pavel, D. Michael
2010-01-01
The purpose of this study was to assess how a comprehensive precollege intervention and developmental program among low-income high school students contributed to college enrollment outcomes measured in 2006. Our focus was on the Fifth Cohort of the Washington State Achievers (WSA) Program, which provides financial, academic, and college…
WISC-III and CAS: Which Correlates Higher with Achievement for a Clinical Sample?
ERIC Educational Resources Information Center
Naglieri, Jack A.; De Lauder, Brianna Y.; Goldstein, Sam; Schwebech, Adam
2006-01-01
The relationships between Wechsler Intelligence Scale for Children-Third Edition (WISC-III) and the Cognitive Assessment System (CAS) with the Woodcock-Johnson Tests of Achievement (WJ-III) were examined for a sample of 119 children (87 males and 32 females) ages 6 to 16. The sample was comprised of children who were referred to a specialty clinic…
Dodge, Cristina T; Tamm, Eric P; Cody, Dianna D; Liu, Xinming; Jensen, Corey T; Wei, Wei; Kundra, Vikas; Rong, John
2016-01-01
The purpose of this study was to characterize image quality and dose performance with GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR), and model-based iterative reconstruction (MBIR), over a range of typical to low-dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat-equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back-projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low-contrast detectability were evaluated from noise and contrast-to-noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five-fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HU in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high-contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial resolution for MBIR
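The CNR measurements above can be computed from region-of-interest statistics. A minimal sketch, assuming the common convention CNR = |mean_ROI - mean_background| / sigma_background (the study's exact definition and ROI placement may differ):

```python
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """Contrast-to-noise ratio: difference of ROI and background means,
    normalized by the background standard deviation (noise estimate)."""
    roi = image[roi_mask]
    bg = image[bg_mask]
    return abs(roi.mean() - bg.mean()) / bg.std()
```

On a synthetic low-contrast insert (mean 10 HU above a background with sigma of 2 HU) this yields a CNR near 5, the kind of figure-of-merit used to compare FBP, ASiR, and MBIR at matched dose.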
ERIC Educational Resources Information Center
Marschke, Robyn; Laursen, Sandra; Nielsen, Joyce McCarl; Rankin, Patricia
2007-01-01
Progress toward equitable gender representation among faculty in higher education has been "glacial" since the early 1970s (Glazer-Raymo, 1999; Lomperis, 1990; Trower & Chait, 2002). Women, who now make up a majority of undergraduate degree earners and approximately 46% of Ph.D. earners nationwide (National Center for Education Statistics [NCES],…
ERIC Educational Resources Information Center
Association of Universities and Colleges of Canada, 2004
2004-01-01
As Canada's opportunities to claim international leadership are assessed, the best prospects lie in a combination of our impressive higher education and research commitments, civic and institutional values, and quality of life. This paper concludes that as an exporting country, the benefits will come in economic growth. As citizens of the world,…
Linking Emotional Intelligence to Achieve Technology Enhanced Learning in Higher Education
ERIC Educational Resources Information Center
Kruger, Janette; Blignaut, A. Seugnet
2013-01-01
Higher education institutions (HEIs) increasingly use technology-enhanced learning (TEL) environments (e.g. blended learning and e-learning) to improve student throughput and retention rates. As the demand for TEL courses increases, expectations rise for faculty to meet the challenge of using TEL effectively. The promises that TEL holds have not…
ERIC Educational Resources Information Center
Ho, Hsuan-Fu; Lin, Ming-Huang; Yang, Cheng-Cheng
2015-01-01
International knowledge and skills are essential for success in today's highly competitive global marketplace. As one of the key providers of such knowledge and skills, universities have become a key focus of the internationalization strategies of governments throughout the world. While the internationalization of higher education clearly has…
Achieving Higher Accuracy in the Gamma-Ray Spectrocopic Assay of Holdup
Russo, P.A.; Wenz, T.R.; Smith, S.E.; Harris, J.F.
2000-09-01
compelling to use these procedures. The algorithms and the procedures are simple, general, and easily automated for use plant-wide. This paper shows the derivation of the new, generalized correction algorithms for finite-source and self-attenuation effects. It also presents an analysis of the sensitivity of the holdup result to the uncertainty in the empirical parameter when one or both corrections are made. The paper uses specific examples of the magnitudes of finite-source and self-attenuation corrections to measurements that were made in the field. It discusses the automated implementation of the correction procedure.
ERIC Educational Resources Information Center
Klapproth, Florian
2015-01-01
Two objectives guided this research. First, this study examined how well teachers' tracking decisions contribute to the homogenization of their students' achievements. Second, the study explored whether teachers' tracking decisions would be outperformed in homogenizing the students' achievements by statistical models of tracking decisions. These…
Moving to higher ground: Closing the high school science achievement gap
NASA Astrophysics Data System (ADS)
Mebane, Joyce Graham
The purpose of this study was to examine the perceptions of West High School constituents (students, parents, teachers, administrators, and guidance counselors) about the readiness and interest of African American students at West High School to take Advanced Placement (AP) and International Baccalaureate (IB) science courses as a strategy for closing the achievement gap. This case study utilized individual interviews and questionnaires for data collection. The participants were selected biology students and their parents, teachers, administrators, and guidance counselors at West High School. The results of the study indicated that just over half the students and teachers, most parents, and all guidance counselors thought African American students were prepared to take AP science courses. Only one of the three administrators thought the students were prepared to take AP science courses. Between one-half and two-thirds of the students, parents, teachers, and administrators thought students were interested in taking an AP science course. Only two of the guidance counselors thought there was interest among the African American students in taking AP science courses. The general consensus among the constituents about the readiness and interest of African American students at West High School to take IB science courses was that it is too early in the process to really make definitive statements. West is a prospective IB school and the program is new and not yet in place. Educators at the West High School community must find reasons to expect each student to succeed. Lower expectations often translate into lower academic demands and less rigor in courses. Lower academic demands and less rigor in courses translate into less than adequate performance by students. When teachers and administrators maintain high expectations, they encourage students to aim high rather than slide by with mediocre effort (Lumsden, 1997). As a result of the study, the following suggestions should
ERIC Educational Resources Information Center
Alstete, Jeffrey W.
2004-01-01
This book focuses on contemporary accreditation, why it matters, and how it can be done effectively. The author covers historical background, getting started, strategies for achieving accreditation, and visions for future academic success, with examples and case studies. Accreditation is the primary way of ensuring the quality of higher education…
ERIC Educational Resources Information Center
Gulacar, Ozcan; Eilks, Ingo; Bowman, Charles R.
2014-01-01
This paper reports a comparison of a group of higher- and lower-achieving undergraduate chemistry students, 17 in total, as separated on their ability in stoichiometry. This exploratory study investigated parallels and differences in the students' general and domain-specific cognitive abilities. Performance, strategies, and…
ERIC Educational Resources Information Center
Keeley, Thomas Allen
2010-01-01
The purpose of this study was to determine whether the areas of teaching methods, teacher-student relationships, school structure, school-community partnerships or school leadership were significantly embedded in practice and acted as a change agent among school systems that achieve higher than expected results on their state standardized testing…
ERIC Educational Resources Information Center
Sarwar, Muhammad; Ashrafi, Ghulam Muhammad
2014-01-01
The purpose of this study was to analyze Students' Commitment, Engagement and Locus of Control as predictors of Academic Achievement at Higher Education Level. We used analytical model and conclusive research approach to conduct study and survey method for data collection. We selected 369 students using multistage sampling technique from…
ERIC Educational Resources Information Center
Schlechter, Melissa; Milevsky, Avidan
2010-01-01
The purpose of the current study is to determine the interconnection between parental level of education, psychological well-being, academic achievement and reasons for pursuing higher education in adolescents. Participants included 439 college freshmen from a mid-size state university in the northeastern USA. A survey, including indices of…
Achieving Higher Diagnostic Results in Stereotactic Brain Biopsy by Simple and Novel Technique
Gulsen, Salih
2015-01-01
BACKGROUND: Neurosurgeons have preferred stereotactic biopsy for pathologic diagnosis when the intracranial pathology is located in eloquent areas or deep sites of the brain. AIM: To achieve a higher rate of definitive pathologic diagnosis during stereotactic biopsy and to develop a practical method. MATERIAL AND METHODS: We determined at least two different target points and two different trajectories for taking brain biopsies, in contrast to the conventional stereotactic method, in which a single point is selected. We separated our patients into two groups: group 1 (N=10) and group 2 (N=19). We chose one target in group 1, and two different targets with two different trajectories in group 2. In group 2, one patient underwent craniotomy due to hemorrhage at the biopsy site during tissue biting. However, no patient in either group suffered any neurological complication related to the biopsy procedure. RESULTS: In group 1, two of 10 cases had positive biopsy harvesting; in group 2, fourteen of 19 cases did. The difference between group 1 and group 2 was statistically significant (P<0.05). CONCLUSIONS: These results indicate that choosing more than one trajectory and taking at least six specimens from each target provides a higher diagnostic rate in stereotactic biopsy.
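The reported group comparison (2/10 vs. 14/19 positive biopsies) is the kind of 2x2 proportion test typically evaluated with Fisher's exact test; the abstract does not name the test used, so the following stdlib-only sketch is an illustrative assumption, not the authors' analysis.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def prob(x):  # P(x successes in row 1 | fixed margins)
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)
    p_obs = prob(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))
```

Applied to the table above (2 positive / 8 negative vs. 14 positive / 5 negative), the two-sided p-value falls below 0.05, consistent with the significance the authors report.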
Jet algorithms in electron-positron annihilation: perturbative higher order predictions
NASA Astrophysics Data System (ADS)
Weinzierl, Stefan
2011-02-01
This article gives results for several jet algorithms in electron-positron annihilation. Considered are the exclusive sequential recombination algorithms Durham, Geneva, Jade-E0 and Cambridge, which are typically used in electron-positron annihilation. In addition, inclusive jet algorithms are studied. Results are provided for the inclusive sequential recombination algorithms Durham, Aachen and anti-kt, as well as the infrared-safe cone algorithm SISCone. The results are obtained in perturbative QCD at N3LO for the two-jet rates, NNLO for the three-jet rates, NLO for the four-jet rates and LO for the five-jet rates.
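The exclusive Durham algorithm named above clusters by repeatedly merging the pair with the smallest resolution variable y_ij = 2 min(Ei, Ej)^2 (1 - cos theta_ij) / Q^2. A minimal sketch with E-scheme recombination (function and variable names are illustrative; the article itself computes perturbative jet rates, not event-by-event clustering):

```python
import numpy as np

def durham_jets(momenta, ycut):
    """Exclusive Durham (kT) clustering for e+e- events: repeatedly
    merge the pair with the smallest y_ij until every remaining pair
    exceeds ycut; the survivors are the exclusive jets."""
    p = [np.asarray(v, dtype=float) for v in momenta]  # (E, px, py, pz)
    Q2 = sum(v[0] for v in p) ** 2                     # total visible energy squared
    while len(p) > 1:
        y_min, pair = None, None
        for i in range(len(p)):
            for j in range(i + 1, len(p)):
                ni = np.linalg.norm(p[i][1:])
                nj = np.linalg.norm(p[j][1:])
                cos_ij = p[i][1:] @ p[j][1:] / (ni * nj)
                y = 2.0 * min(p[i][0], p[j][0]) ** 2 * (1.0 - cos_ij) / Q2
                if y_min is None or y < y_min:
                    y_min, pair = y, (i, j)
        if y_min > ycut:
            break
        i, j = pair
        p[i] = p[i] + p[j]  # E-scheme: add four-momenta
        del p[j]
    return p
```

A back-to-back pair plus one collinear particle clusters to two jets at any reasonable ycut, since the collinear pair has y_ij = 0 and merges first.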
Beaujean, A Alexander; Parkin, Jason; Parker, Sonia
2014-09-01
Previous research using the Cattell-Horn-Carroll (CHC) theory of cognitive abilities has shown a relationship between cognitive ability and academic achievement. Most of this research, however, has been done using the Woodcock-Johnson family of instruments with a higher order factor model. For CHC theory to grow, research should be done with other assessment instruments and tested with other factor models. This study examined the relationship between different factor models of CHC theory and the factors' relationships with language-based academic achievement (i.e., reading and writing). Using the co-norming sample for the Wechsler Intelligence Scale for Children--4th Edition and the Wechsler Individual Achievement Test--2nd Edition, we found that bifactor and higher order models of the subtests of the Wechsler Intelligence Scale for Children-4th Edition produced a different set of Stratum II factors, which, in turn, have very different relationships with the language achievement variables of the Wechsler Individual Achievement Test--2nd Edition. We conclude that the factor model used to represent CHC theory makes little difference when general intelligence is of major interest, but it makes a large difference when the Stratum II factors are of primary concern, especially when they are used to predict other variables. PMID:24840178
NASA Astrophysics Data System (ADS)
Zeng, Li; Jansen, Christian; Unser, Michael A.; Hunziker, Patrick
2001-12-01
High resolution multidimensional image data yield huge datasets. For compression and analysis, 2D approaches are often used, neglecting the information coherence in higher dimensions, which can be exploited for improved compression. We designed a wavelet compression algorithm suited for data of arbitrary dimensions, and assessed its ability for compression of 4D medical images. Basically, separable wavelet transforms are done in each dimension, followed by quantization and standard coding. Results were compared with a conventional 2D wavelet approach. We found that in 4D heart images, this algorithm allowed high compression ratios, preserving diagnostically important image features. For similar image quality, compression ratios using the 3D/4D approaches were typically much higher (2-4 times per added dimension) than with the 2D approach. For low-resolution images created with the requirement to keep predefined key diagnostic information (contractile function of the heart), compression ratios up to 2000 could be achieved. Thus, higher-dimensional wavelet compression is feasible, and by exploitation of data coherence in higher image dimensions allows much higher compression than comparable 2D approaches. The proven applicability of this approach to multidimensional medical imaging has important implications especially for the fields of image storage and transmission and, specifically, for the emerging field of telemedicine.
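The separable n-dimensional transform described above can be sketched with a one-level Haar step applied along each axis in turn. This toy version assumes even axis lengths and a specific orthonormal wavelet (Haar), and substitutes simple hard thresholding for the paper's quantization and coding stages:

```python
import numpy as np

def haar_1d(a, axis):
    """One orthonormal Haar analysis step along one (even-length) axis:
    pairwise averages followed by pairwise differences."""
    a = np.swapaxes(a, 0, axis)
    avg = (a[0::2] + a[1::2]) / np.sqrt(2)
    diff = (a[0::2] - a[1::2]) / np.sqrt(2)
    return np.swapaxes(np.concatenate([avg, diff]), 0, axis)

def separable_haar(volume):
    """Apply the 1D step along every dimension in turn, as in the
    separable n-D scheme the abstract describes."""
    out = volume.astype(float)
    for ax in range(out.ndim):
        out = haar_1d(out, ax)
    return out

def compress(coeffs, keep=0.1):
    """Hard-threshold: zero all but the largest `keep` fraction of
    coefficients (stand-in for quantization and entropy coding)."""
    flat = np.abs(coeffs).ravel()
    thresh = np.sort(flat)[int((1 - keep) * flat.size)]
    return np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
```

Because each 1D step is orthonormal, the transform preserves total energy; in coherent data the energy concentrates in few coefficients, which is why adding dimensions to the transform raises the achievable compression ratio.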
NASA Astrophysics Data System (ADS)
Putro, Budi Laksono; Surendro, Kridanto; Herbert
2016-02-01
Data is a vital asset for a business enterprise in achieving organizational goals. Data and information affect the decision-making process across the various activities of an organization. Data problems include validity, quality, duplication, control over data, and difficulty of data availability. Data governance is the way a company or institution manages its data assets. Data governance covers the rules, policies, procedures, roles and responsibilities, and performance indicators that direct the overall management of data assets. Many studies on data or information governance stress the importance of cultural factors in data governance. Organizational leadership and culture have a very close relationship, captured in two complementary ideas: culture is created by leaders, and leaders are created by culture. Based on the above, this study addresses the theme "Leadership and Culture of Data Governance for the Achievement of Higher Education Goals (Case Study: Indonesia University of Education)". A culture and leadership model for data governance in Indonesian higher education was developed by comparing several models of data governance, organizational culture, and organizational leadership from previous studies, based on the advantages and disadvantages of each model with respect to the organization's existing business. The resulting model shows that the current organizational culture at FPMIPA, Indonesia University of Education, is a market culture, while the desired culture is a clan culture. Current organizational leadership shows an Individualism Index (IDV) of 83.72%, with situational leadership in the selling position.
NASA Astrophysics Data System (ADS)
Erlick, Katherine
"The stereotype of engineers is that they are not people oriented; the stereotype implies that engineers would not work well in teams---that their task emphasis is a solo venture and does not encourage social aspects of collaboration" (Miner & Beyerlein, 1999, p. 16). The problem is determining the best method of providing a motivating environment in which design engineers may contribute within a team in order to achieve higher performance in the organization. Theoretically, self-directed work teams perform at higher levels. But allowing a design engineer to contribute to the team while still maintaining his or her anonymity is the key to success. Therefore, a motivating environment must be established to encourage greater self-actualization in design engineers. The purpose of this study is to determine the favorable motivational environment for design engineers by comparing two aerospace design-engineering teams: one self-directed and the other manager-directed. Following the comparison, this study identified whether self-direction or manager-direction provides the more favorable motivational environment for operating as a team in pursuit of higher performance. The methodology used in this research was the case study, focusing on each team's level of job satisfaction and potential for higher performance. The data came from three sources: (a) surveys, (b) a researcher observer journal, and (c) a collection of artifacts. The surveys provided information regarding personal behavior characteristics, potential for higher performance, and motivational attributes. The researcher journal provided information regarding team dynamics, individual interaction, conflict, and conflict resolution. The milestone for performance was based on the artifacts collected from the two teams. The findings from this study illustrated that whether the team was manager-directed or self-directed does not appear to influence the needs and wants of the
General relaxation schemes in multigrid algorithms for higher order singularity methods
NASA Technical Reports Server (NTRS)
Oskam, B.; Fray, J. M. J.
1981-01-01
Relaxation schemes based on approximate and incomplete factorization techniques (AF) are described. The AF schemes allow construction of a fast multigrid method for solving integral equations of the second and first kind. The smoothing factors for integral equations of the first kind, and a comparison with similar results for equations of the second kind, are a novel item. Application of the multigrid (MG) algorithm shows convergence to the level of the truncation error of a second-order accurate panel method.
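The smoothing factor mentioned above measures how strongly one relaxation sweep damps the high-frequency error components that the coarse grid cannot represent. As an illustration of the concept only (a textbook weighted-Jacobi analysis for the 1-D Laplacian, not the paper's AF schemes), the factor is the worst-case damping over the high-frequency modes:

```python
import math

def jacobi_smoothing_factor(omega, samples=2001):
    """Smoothing factor of weighted Jacobi relaxation for the 1-D Laplacian.
    High-frequency modes have frequencies theta in [pi/2, pi] and are damped
    by the amplification factor 1 - omega * (1 - cos(theta)); the smoothing
    factor is the largest such damping in absolute value."""
    worst = 0.0
    for i in range(samples):
        theta = math.pi / 2.0 + (math.pi / 2.0) * i / (samples - 1)
        worst = max(worst, abs(1.0 - omega * (1.0 - math.cos(theta))))
    return worst
```

With the classical choice omega = 2/3 this evaluates to 1/3, the well-known optimal smoothing factor for this model problem.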
ERIC Educational Resources Information Center
Stringer, Neil
2008-01-01
Advocates of using a US-style SAT for university selection claim that it is fairer to applicants from disadvantaged backgrounds than achievement tests because it assesses potential, not achievement, and that it allows finer discrimination between top applicants than GCEs. The pros and cons of aptitude tests in principle are discussed, focusing on…
ERIC Educational Resources Information Center
Siahi, Evans Atsiaya; Maiyo, Julius K.
2015-01-01
The studies on the correlation of academic achievement have paved way for control and manipulation of related variables for quality results in schools. In spite of the facts that schools impart uniform classroom instructions to all students, wide range of difference is observed in their academic achievement. The study sought to determine the…
ERIC Educational Resources Information Center
Latha, Prema
2014-01-01
Disturbing sounds are often referred to as noise, and if extreme enough in degree, intensity or frequency, it is referred to as noise pollution. Achievement refers to a change in study behavior in relation to their noise sensitivity and learning in the educational sense by achieving results in changed responses to certain types of stimuli like…
ERIC Educational Resources Information Center
Wright, Bobby
This paper reviews the history of higher education for Native Americans and proposes change strategies. Assimilation was the primary goal of higher education from early colonial times to the 20th century. Tribal response ranged from resistance to support of higher education. When the Federal Government began to dominate Native education in the…
ERIC Educational Resources Information Center
Pickert, Sarah M.
This report discusses the response of colleges and universities in the United States to the need of graduate students to become equipped to make personal and public policy decisions as citizens of an international society. Curriculum changes are showing a tightening of foreign language standards in schools of higher education and, throughout the…
ERIC Educational Resources Information Center
Ehrlich, Jenifer, Ed.
2006-01-01
"Forum Focus" was a semi-annual magazine of the Business-Higher Education Forum (BHEF) that featured articles on the role of business and higher education on significant issues affecting the P-16 education system. The magazine typically focused on themes featured at the most recently held semi-annual Forum meeting at the time of publication.…
NASA Astrophysics Data System (ADS)
Jun, Xie Cheng; Su, Yan; Wei, Zhang
2006-08-01
In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and research on image compression with the CDF(2,2) wavelet lifting scheme is presented. Our experiments show that lossless image compression performance is much better than that of Huffman, Zip, lossless JPEG, and RAR, and a little better than (or equal to) the well-known SPIHT: the lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7%, and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, and its time efficiency can be improved by 162%; the decoder is about 12.3 times faster than SPIHT's, and its time efficiency can be raised by about 148%. Rather than requiring the largest number of wavelet transform levels, this algorithm achieves high coding efficiency when the number of transform levels is greater than 3. For source models with distributions similar to the Laplacian, it improves coding efficiency and enables progressive transmission coding and decoding.
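For reference, the CDF(2,2) lifting scheme named above is the reversible 5/3 wavelet transform: a predict step subtracts the average of neighbouring even samples from each odd sample, and an update step adds a quarter of the neighbouring details back to the evens, all in integer arithmetic so the transform is exactly invertible. The sketch below is a generic single-level implementation with simple boundary replication, not the paper's modified codec:

```python
def cdf22_forward(x):
    """One level of the reversible integer CDF(2,2) (5/3) lifting transform.
    Assumes an even-length signal; boundaries are handled by replication."""
    s = list(x[0::2])   # even (smooth) samples
    d = list(x[1::2])   # odd (detail) samples
    for i in range(len(d)):            # predict: detail -= average of evens
        right = s[i + 1] if i + 1 < len(s) else s[-1]
        d[i] -= (s[i] + right) >> 1
    for i in range(len(s)):            # update: smooth += quarter of details
        left = d[i - 1] if i > 0 else d[0]
        s[i] += (left + d[i] + 2) >> 2
    return s, d

def cdf22_inverse(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    s, d = list(s), list(d)
    for i in range(len(s)):            # un-update
        left = d[i - 1] if i > 0 else d[0]
        s[i] -= (left + d[i] + 2) >> 2
    for i in range(len(d)):            # un-predict
        right = s[i + 1] if i + 1 < len(s) else s[-1]
        d[i] += (s[i] + right) >> 1
    x = [0] * (len(s) + len(d))
    x[0::2], x[1::2] = s, d
    return x
```

Because each lifting step only adds a function of the other channel, the inverse subtracts the identical quantity, which is what makes the integer transform lossless.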
ERIC Educational Resources Information Center
Brooks, Candice Elaine
2012-01-01
This article discusses the findings of an exploratory qualitative study that examined the influences of individual and collective sociocultural identities on the community involvements and high academic achievement of 10 Black alumni who attended a predominantly White institution between 1985 and 2008. Syntagmatic narrative analysis and…
ERIC Educational Resources Information Center
Lorch, Robert F., Jr.; Lorch, Elizabeth P.; Freer, Benjamin Dunham; Dunlap, Emily E.; Hodell, Emily C.; Calderhead, William J.
2014-01-01
Students (n = 1,069) from 60 4th-grade classrooms were taught the control of variables strategy (CVS) for designing experiments. Half of the classrooms were in schools that performed well on a state-mandated test of science achievement, and half were in schools that performed relatively poorly. Three teaching interventions were compared: an…
ERIC Educational Resources Information Center
Wurst, Christian; Smarkola, Claudia; Gaffney, Mary Anne
2008-01-01
Three years of graduating business honors cohorts in a large urban university were sampled to determine whether the introduction of ubiquitous laptop computers into the honors program contributed to student achievement, student satisfaction and constructivist teaching activities. The first year cohort consisted of honors students who did not have…
Guijarro-Herraiz, Carlos; Masana-Marin, Luis; Galve, Enrique; Cordero-Fort, Alberto
2014-01-01
Reducing low density lipoprotein-cholesterol (LDL-c) is the main lipid goal of treatment for patients with very high cardiovascular risk. In these patients the therapeutic goal is to achieve a LDL-c lower than 70 mg/dL, as recommended by the guidelines for cardiovascular prevention commonly used in Spain and Europe. However, the degree of achieving these objectives in this group of patients is very low. This article describes the prevalence of the problem and the causes that motivate it. Recommendations and tools that can facilitate the design of an optimal treatment strategy for achieving the goals are also given. In addition, a new tool with a simple algorithm that can allow these very high risk patients to achieve the goals "in two-steps", i.e., with only two doctor check-ups, is presented. PMID:25048471
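The arithmetic behind such goal-directed treatment planning is simple: the required percentage LDL-c reduction follows from the patient's current level and the 70 mg/dL goal. A minimal sketch of that calculation (an illustration only, not the article's two-step algorithm):

```python
def ldl_reduction_needed(current_mg_dl, goal_mg_dl=70.0):
    """Percent LDL-c reduction required to reach the goal (default < 70 mg/dL).
    Returns 0 if the patient is already at goal."""
    if current_mg_dl <= goal_mg_dl:
        return 0.0
    return 100.0 * (current_mg_dl - goal_mg_dl) / current_mg_dl
```

For example, a patient at 140 mg/dL needs a 50% reduction, which in turn constrains which therapy intensities can plausibly reach the goal.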
ERIC Educational Resources Information Center
Mudhovozi, P.; Gumani, M.; Maunganidze, L.; Sodi, T.
2010-01-01
The study explores the attribution styles of in-group and out-group members. Eighty-four (42 female and 42 male) undergraduate students were randomly selected from the Faculty of Education at an institution of higher learning in Zimbabwe. A questionnaire was used to capture the opinions of the participants. The data was analysed using the…
ERIC Educational Resources Information Center
James, Matthew R.
2009-01-01
Leal Filho, MacDermot, and Padgam (1996) contended that post-secondary institutions are well suited to take on leadership responsibilities for society's environmental protection. Higher education has the unique academic freedom to engage in critical thinking and bold experimentation in environmental sustainability (Cortese, 2003). Although…
ERIC Educational Resources Information Center
Mingle, James R., Ed.; Rodriguez, Esther M., Ed.
This report describes initiatives of higher education boards to provide equal educational opportunities for minority students in the following states: (1) Arizona; (2) Colorado; (3) Illinois; (4) Massachusetts; (5) Montana; (6) New York; (7) Ohio; and (8) Tennessee. Evidence of school completion, academic preparation, college participation rates,…
ERIC Educational Resources Information Center
Houston, Don
2010-01-01
While the past two decades have seen significant expansion and harmonisation of quality assurance mechanisms in higher education, there is limited evidence of positive effects on the quality of core processes of teaching and learning. The paradox of the separation of assurance from improvement is explored. A shift in focus from surveillance to…
ERIC Educational Resources Information Center
Jackson, Norman; Ward, Rob
2004-01-01
This article addresses the challenge of developing new conceptual knowledge to help us make better sense of the way that higher education is approaching the "problem" of representing (documenting, certifying and communicating by other means) students' learning for the super-complex world described by Barnett (2000b). The current UK solution to…
Chakraborty, Mohua; Ghosh, Sankar Kumar
2015-04-01
The efficacy of the cytochrome c oxidase subunit I (COI) DNA barcode in higher taxon assignment is still under debate, in spite of several attempts to assign higher taxa using conventional DNA barcoding methods. Here we try to understand whether the nucleotide and amino acid sequences of the COI gene carry sufficient information to assign species to their higher taxonomic ranks, using 160 species of Indian freshwater fishes. Our results reveal that as taxonomic rank increases, sequence conservation decreases for both nucleotides and amino acids. The order level exhibits the lowest conservation, with 50% of nucleotides and amino acids conserved. Among the variable sites, 30-50% were found to carry high information content within an order, while the figure was 70-80% within a family and 80-99% within a genus. Sites with high information content are almost fully conserved, varying at only one or two positions, which can be due to variation at the species or population level. Thus, the potential of the COI gene in higher taxon assignment is demonstrated, validating the ample inherent signal latent in the gene. PMID:24409929
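Per-site conservation of the kind reported here can be computed directly from an alignment by counting columns in which every sequence shares the same residue. A minimal sketch (the function name and the list-of-strings input format are illustrative assumptions, not the authors' pipeline):

```python
def site_conservation(aligned_seqs):
    """Fraction of alignment columns in which every sequence shares
    one residue. Input: equal-length aligned sequences as strings."""
    ncols = len(aligned_seqs[0])
    conserved = sum(1 for i in range(ncols)
                    if len({seq[i] for seq in aligned_seqs}) == 1)
    return conserved / ncols
```

Running this separately on sequences grouped by genus, family, and order would reproduce the kind of rank-by-rank conservation comparison the abstract describes.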
ERIC Educational Resources Information Center
New York City Board of Education, Brooklyn, NY. Office of Research, Evaluation, and Assessment.
A final evaluation was conducted in the 1989-90 school year of New York City (New York) Board of Education's project, Higher Achievement and Improvement Through Instruction with Computers and Scholarly Transition and Resource Systems (HAITI STARS). The project served 524 limited-English-proficient Spanish-speaking students at Far Rockaway High…
ERIC Educational Resources Information Center
Augustin, Marc A.
The Higher Achievement and Improvement Through Instruction with Computers and Scholarly Transition And Resource Systems program (Project HAITI STARS), a federally-funded bilingual education program, served 425 students of limited English proficiency at three high schools in New York City during its fifth contract year. Students received…
Tavares, Eveline Q P; De Souza, Amanda P; Buckeridge, Marcos S
2015-07-01
Cell-wall recalcitrance to hydrolysis still represents one of the major bottlenecks for second-generation bioethanol production. This occurs despite the development of pre-treatments, the prospect of new enzymes, and the production of transgenic plants with less-recalcitrant cell walls. Recalcitrance, which is the intrinsic resistance to breakdown imposed by polymer assembly, is the result of inherent limitations in its three domains. These consist of: (i) porosity, associated with a pectin matrix impairing trafficking through the wall; (ii) the glycomic code, which refers to the fine-structural emergent complexity of cell-wall polymers that are unique to cells, tissues, and species; and (iii) cellulose crystallinity, which refers to the organization in micro- and/or macrofibrils. One way to circumvent recalcitrance could be by following cell-wall hydrolysis strategies underlying plant endogenous mechanisms that are optimized to precisely modify cell walls in planta. Thus, the cell-wall degradation that occurs during fruit ripening, abscission, storage cell-wall mobilization, and aerenchyma formation are reviewed in order to highlight how plants deal with recalcitrance and which are the routes to couple prospective enzymes and cocktail designs with cell-wall features. The manipulation of key enzyme levels in planta can help achieving biologically pre-treated walls (i.e. less recalcitrant) before plants are harvested for bioethanol production. This may be helpful in decreasing the costs associated with producing bioethanol from biomass. PMID:25922489
ERIC Educational Resources Information Center
Baran, Bahar; Kiliç, Eylem
2015-01-01
The purpose of this study is to analyze three separate constructs (demographics, study habits, and technology familiarity) that can be used to identify university students' characteristics and the relationship between each of these constructs with student achievement. A survey method was used for the current study, and the participants included…
Benson, Nicholas F; Kranzler, John H; Floyd, Randy G
2016-10-01
Prior research examining relations between cognitive ability and academic achievement has been based on different theoretical models, has employed both latent and observed variables, and has used a variety of analytic methods. Not surprisingly, results have been inconsistent across studies. The aims of this study were to (a) examine how relations between psychometric g, Cattell-Horn-Carroll (CHC) broad abilities, and academic achievement differ across higher-order and bifactor models; (b) examine how well various types of observed scores corresponded with latent variables; and (c) compare two types of observed scores (i.e., refined and non-refined factor scores) as predictors of academic achievement. Results suggest that cognitive-achievement relations vary across theoretical models and that both types of factor scores tend to correspond well with the models on which they are based. However, orthogonal refined factor scores (derived from a bifactor model) have the advantage of controlling for multicollinearity arising from the measurement of psychometric g across all measures of cognitive abilities. Results indicate that the refined factor scores provide more precise representations of their targeted constructs than non-refined factor scores and maintain close correspondence with the cognitive-achievement relations observed for latent variables. Thus, we argue that orthogonal refined factor scores provide more accurate representations of the relations between CHC broad abilities and achievement outcomes than non-refined scores do. Further, the use of refined factor scores addresses calls for the application of scores based on latent variable models. PMID:27586067
ERIC Educational Resources Information Center
What Works Clearinghouse, 2014
2014-01-01
This study of 952 fifth and sixth graders in Washington, DC, and Alexandria, Virginia, found that students who were offered the "Higher Achievement" program had higher test scores in mathematical problem solving and were more likely to be admitted to and attend private competitive high schools. "Higher Achievement" is a…
Salfity, M.F; Huntley, J.M; Graves, M.J; Marklund, O; Cusack, R; Beauregard, D.A
2005-01-01
Phase contrast magnetic resonance velocity imaging is a powerful technique for quantitative in vivo blood flow measurement. Current practice normally involves restricting the sensitivity of the technique so as to avoid the problem of the measured phase being ‘wrapped’ onto the range −π to +π. However, as a result, dynamic range and signal-to-noise ratio are sacrificed. Alternatively, the true phase values can be estimated by a phase unwrapping process which consists of adding integral multiples of 2π to the measured wrapped phase values. In the presence of noise and data undersampling, the phase unwrapping problem becomes non-trivial. In this paper, we investigate the performance of three different phase unwrapping algorithms when applied to three-dimensional (two spatial axes and one time axis) phase contrast datasets. A simple one-dimensional temporal unwrapping algorithm, a more complex and robust three-dimensional unwrapping algorithm and a novel velocity encoding unwrapping algorithm which involves unwrapping along a fourth dimension (the ‘velocity encoding’ direction) are discussed, and results from the three are presented and compared. It is shown that compared to the traditional approach, both dynamic range and signal-to-noise ratio can be increased by a factor of up to five times, which demonstrates considerable promise for a possible eventual clinical implementation. The results are also of direct relevance to users of any other technique delivering time-varying two-dimensional phase images, such as dynamic speckle interferometry and synthetic aperture radar. PMID:16849270
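The simplest of the three approaches, 1-D temporal unwrapping, shifts each sample by the multiple of 2&#960; that minimises the jump from its predecessor; it succeeds whenever the true phase changes by less than &#960; between samples. A minimal sketch of this standard algorithm (not the authors' 3-D or velocity-encoding variants):

```python
import math

def unwrap_1d(phases):
    """1-D phase unwrapping: add the multiple of 2*pi to each sample that
    brings it closest to the previous unwrapped value, so successive
    samples differ by less than pi."""
    out = [phases[0]]
    for p in phases[1:]:
        k = round((out[-1] - p) / (2.0 * math.pi))  # nearest 2*pi multiple
        out.append(p + 2.0 * math.pi * k)
    return out
```

Noise or undersampling can make the true inter-sample jump exceed &#960;, which is exactly the failure mode that motivates the more robust 3-D and velocity-encoding unwrapping algorithms compared in the paper.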
El-Qulity, Said Ali; Mohamed, Ali Wagdy
2016-01-01
This paper proposes a nonlinear integer goal programming model (NIGPM) for solving the general problem of admission capacity planning in a country as a whole. The work aims to satisfy most of a country's required key objectives related to the enrollment problem for higher education. The general outlines of the system are developed, along with the solution methodology for application over the time horizon of a given plan. Up-to-date data for Saudi Arabia is used as a case study, and a novel evolutionary algorithm based on a modified differential evolution (DE) algorithm is used to handle the complexity of the NIGPM generated for different goal priorities. The experimental results presented in this paper show the approach's effectiveness in solving the admission capacity problem for higher education in terms of final solution quality and robustness. PMID:26819583
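For context, classic DE/rand/1/bin evolves a population by adding a scaled difference of two random members to a third, recombining with the target via binomial crossover, and keeping the trial only if it is no worse. The paper's modified DE differs in details not reproduced here; this is a hedged sketch of the standard algorithm on a simple test function:

```python
import random

def differential_evolution(f, bounds, pop_size=30, weight=0.7, cross=0.9,
                           gens=200, seed=1):
    """Classic DE/rand/1/bin minimiser (standard algorithm, not the
    paper's modified DE). `bounds` is a list of (lo, hi) per dimension."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # three distinct donors, none equal to the target index i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < cross or j == jrand:
                    lo, hi = bounds[j]
                    v = pop[a][j] + weight * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

Goal-programming objectives like the NIGPM's would enter through `f`, e.g. as a weighted sum of deviations from the goal targets.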
Rieger-Fackeldey, Esther; Sindelar, Richard; Jonzon, Anders; Schulze, Andreas; Sedin, Gunnar
2005-01-01
Background Inhibition of phrenic nerve activity (PNA) can be achieved when alveolar ventilation is adequate and when stretching of lung tissue stimulates mechanoreceptors to inhibit inspiratory activity. During mechanical ventilation under different lung conditions, inhibition of PNA can provide a physiological setting at which ventilatory parameters can be compared and related to arterial blood gases and pH. Objective To study lung mechanics and gas exchange at inhibition of PNA during controlled gas ventilation (GV) and during partial liquid ventilation (PLV) before and after lung lavage. Methods Nine anaesthetised, mechanically ventilated young cats (age 3.8 ± 0.5 months, weight 2.3 ± 0.1 kg) (mean ± SD) were studied with stepwise increases in peak inspiratory pressure (PIP) until total inhibition of PNA was attained before lavage (with GV) and after lavage (GV and PLV). Tidal volume (Vt), PIP, oesophageal pressure and arterial blood gases were measured at inhibition of PNA. One way repeated measures analysis of variance and Student Newman Keuls-tests were used for statistical analysis. Results During GV, inhibition of PNA occurred at lower PIP, transpulmonary pressure (Ptp) and Vt before than after lung lavage. After lavage, inhibition of inspiratory activity was achieved at the same PIP, Ptp and Vt during GV and PLV, but occurred at a higher PaCO2 during PLV. After lavage compliance at inhibition was almost the same during GV and PLV and resistance was lower during GV than during PLV. Conclusion Inhibition of inspiratory activity occurs at a higher PaCO2 during PLV than during GV in cats with surfactant-depleted lungs. This could indicate that PLV induces better recruitment of mechanoreceptors than GV. PMID:15748281
Otsuka, Mitsuo; Kawahara, Taisuke; Isaka, Tadao
2016-03-01
This study aimed to clarify the contribution of differences in step length and step rate to sprinting velocity in an athletic race compared with speed training. Nineteen well-trained male and female sprinters volunteered to participate in this study. Sprinting motions were recorded for each sprinter during both 100-m races and speed training (60-, 80-, and 100-m dash from a block start) for 14 days before the race. Repeated-measures analysis of covariance was used to compare the step characteristics and sprinting velocity between race and speed training, adjusted for covariates including race-training differences in the coefficients of restitution of the all-weather track, wind speed, air temperature, and sex. The average sprinting velocity to the 50-m mark was significantly greater in the race than in speed training (8.26 ± 0.22 m·s⁻¹ vs. 8.00 ± 0.70 m·s⁻¹, p < 0.01). Although no significant difference was seen in the average step length to the 50-m mark between the race and speed training (1.81 ± 0.09 m vs. 1.80 ± 0.09 m, p = 0.065), the average step rate was significantly greater in the race than in speed training (4.56 ± 0.17 Hz vs. 4.46 ± 0.13 Hz, p < 0.01). These findings suggest that sprinters achieve higher sprinting velocity and can run with higher exercise intensity and more rapid motion during a race than during speed training, even if speed training was performed at perceived high intensity. PMID:26907837
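The reported figures are internally consistent, since average velocity is the product of average step length and step rate; a quick arithmetic check:

```python
def average_velocity(step_length_m, step_rate_hz):
    """Average sprinting velocity as step length times step rate (m/s)."""
    return step_length_m * step_rate_hz
```

Here 1.81 m × 4.56 Hz ≈ 8.25 m·s⁻¹, matching the reported race velocity of 8.26 m·s⁻¹ to within rounding of the published means.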
Lees, J.R.
1983-01-01
This study was a systematic replication of a study by Stagliano (1981). Additional hypotheses concerning pretest, student major, and student section variance were tested. Achievement in energy knowledge and conservation attitudes attained by (a) lecture-discussion enriched with the Energy-Environment Simulator and (b) lecture-discussion methods of instruction was measured. Energy knowledge was measured on the Energy Knowledge Assessment Test (EKAT), and attitudes were measured on the Youth Energy Survey (YES). The lecture-discussion simulation (LDS) used a two-hour out-of-class activity in debriefing. The population consisted of 142 college student volunteers, randomly selected and assigned to one of two groups of 71 students for each treatment. Stagliano used three groups (n = 35), one group receiving an energy-game treatment. Both studies used the pretest-posttest true experimental design. The present study included 28 hypotheses, eight of which were found to be significant. Stagliano used 12 hypotheses, all of which were rejected. The present study hypothesized that students who received the LDS treatment would obtain significantly higher scores on the EKAT and YES instruments. Results showed significance (alpha level .05) on the EKAT, and also on the YES total subscale when covaried for the effects of pretest, student major, and student section. When covarying the effects of pretest scores only, significance was found on the EKAT. All YES hypotheses were rejected.
Ekinci, Yunus Levent
2016-01-01
This paper presents an easy-to-use open source computer algorithm (code) for estimating the depths of isolated single thin dike-like source bodies by using numerical second-, third-, and fourth-order horizontal derivatives computed from observed magnetic anomalies. The approach does not require a priori information and uses some filters of successive graticule spacings. The computed higher-order horizontal derivative datasets are used to solve nonlinear equations for depth determination. The solutions are independent from the magnetization and ambient field directions. The practical usability of the developed code, designed in MATLAB R2012b (MathWorks Inc.), was successfully examined using some synthetic simulations with and without noise. The algorithm was then used to estimate the depths of some ore bodies buried in different regions (USA, Sweden, and Canada). Real data tests clearly indicated that the obtained depths are in good agreement with those of previous studies and drilling information. Additionally, a state-of-the-art inversion scheme based on particle swarm optimization produced comparable results to those of the higher-order horizontal derivative analyses in both synthetic and real anomaly cases. Accordingly, the proposed code is verified to be useful in interpreting isolated single thin dike-like magnetized bodies and may be an alternative processing technique. The open source code can be easily modified and adapted to suit the benefits of other researchers. PMID:27610303
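The numerical higher-order horizontal derivatives used by such codes can be formed by repeated differencing of the observed anomaly profile. A minimal sketch using central differences (an illustration of the operation only, not the published MATLAB code or its graticule-spacing filters):

```python
def central_derivative(profile, spacing):
    """Central-difference derivative of a 1-D profile (one-sided at the ends)."""
    n = len(profile)
    d = [0.0] * n
    for i in range(1, n - 1):
        d[i] = (profile[i + 1] - profile[i - 1]) / (2.0 * spacing)
    d[0] = (profile[1] - profile[0]) / spacing
    d[-1] = (profile[-1] - profile[-2]) / spacing
    return d

def horizontal_derivatives(profile, spacing, max_order=4):
    """Derivatives of orders 1..max_order by repeated central differencing."""
    out, cur = [], list(profile)
    for _ in range(max_order):
        cur = central_derivative(cur, spacing)
        out.append(cur)
    return out
```

In practice repeated differencing amplifies high-frequency noise, which is why the paper tests its depth estimates on both noise-free and noisy synthetic anomalies.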
ERIC Educational Resources Information Center
Waldron, Chad H.
2008-01-01
The research study examined whether a difference existed between the reading achievement scores of an experimental group and a control group in standardized reading achievement. This difference measured the effect of systematic oral reading fluency instruction with repeated readings. Data from the 4Sight Pennsylvania Benchmark Reading Assessments…
ERIC Educational Resources Information Center
Chudowsky, Naomi; Chudowsky, Victor; Kober, Nancy
2009-01-01
This report is the first in a series of reports describing results from the Center on Education Policy's (CEP's) third annual analysis of state testing data. The report provides an update on student performance at the proficient level of achievement, and for the first time, includes data about student performance at the advanced and basic levels.…
ERIC Educational Resources Information Center
Clune, William H.; White, Paula A.
1992-01-01
Transcript data were analyzed to determine changes in course taking among graduates of high schools including mostly lower achieving students in California, Florida, Missouri, and Pennsylvania, which adopted high graduation requirements in the 1980s. Average credits per student increased in all academic subjects, as did the courses' difficulty…
NASA Astrophysics Data System (ADS)
Chakraborty, Swarnendu Kumar; Goswami, Rajat Subhra; Bhunia, Chandan Tilak; Bhunia, Abhinandan
2016-06-01
The aggressive packet combining (APC) scheme is well established in the literature, and several modifications have been studied to improve throughput. In this paper, three new modifications of APC are proposed. The performance of the proposed modified APC schemes is studied by simulation and reported here. A hybrid scheme is also proposed for achieving higher throughput, and the disjoint factor of conventional APC is compared with those of the proposed schemes.
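For orientation, packet-combining schemes fuse several erroneous received copies of the same packet instead of discarding them. The simplest fusion rule is a bitwise majority vote over an odd number of copies (a related combining rule; APC proper additionally locates the bit positions where copies disagree and searches over their inversions). A minimal sketch of the majority-vote rule:

```python
def majority_combine(copies):
    """Bitwise majority vote across an odd number of received bit-list
    copies of one packet; each column's majority bit wins."""
    n = len(copies)
    return [1 if sum(col) * 2 > n else 0 for col in zip(*copies)]
```

With independent bit errors, the combined packet is correct wherever fewer than half of the copies are in error at that position, which is the mechanism behind the throughput gains the abstract reports.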
ERIC Educational Resources Information Center
Briddell, Andrew
2013-01-01
This study of 1,974 fifth grade students investigated potential relationships between writing process-based instruction practices and higher-order thinking measured by a standardized literacy assessment. Writing process is defined as a highly complex, socio-cognitive process that includes: planning, text production, review, metacognition, writing…
ERIC Educational Resources Information Center
Kennedy, Gary J.
2013-01-01
This essay proposes that much of what constitutes the quality of an institution of higher education is the quality of the students attending the institution. This quality, however, is conceptualized to extend beyond that of academic ability. Specifically, three propositions are considered. First, it is proposed that a core construct of student…
2012-01-01
Background The algorithmic approach to guidelines has been introduced and promoted on a large scale since the 1970s. This study aims at comparing the performance of three algorithms for the management of chronic cough in patients with HIV infection, and at reassessing the current position of algorithmic guidelines in clinical decision making through an analysis of accuracy, harm and complexity. Methods Data were collected at the University Hospital of Kigali (CHUK) in a total of 201 HIV-positive hospitalised patients with chronic cough. We simulated management of each patient following the three algorithms. The first was locally tailored by clinicians from CHUK, the second and third were drawn from publications by Médecins sans Frontières (MSF) and the World Health Organisation (WHO). Semantic analysis techniques known as Clinical Algorithm Nosology were used to compare them in terms of complexity and similarity. For each of them, we assessed the sensitivity, delay to diagnosis and hypothetical harm of false positives and false negatives. Results The principal diagnoses were tuberculosis (21%) and pneumocystosis (19%). Sensitivity, representing the proportion of correct diagnoses made by each algorithm, was 95.7%, 88% and 70% for CHUK, MSF and WHO, respectively. Mean time to appropriate management was 1.86 days for CHUK and 3.46 for the MSF algorithm. The CHUK algorithm was the most complex, followed by MSF and WHO. Total harm was by far the highest for the WHO algorithm, followed by MSF and CHUK. Conclusions This study confirms our hypothesis that sensitivity and patient safety (i.e. less expected harm) are proportional to the complexity of algorithms, though increased complexity may make them difficult to use in practice. PMID:22260242
ERIC Educational Resources Information Center
MacKay, Irene Douglas
The purpose of this study was to investigate the relationship between a student's confidence in his computational procedures for each of the four basic arithmetic operations and the student's achievement on computation problems. All of the students in grades 5 through 8 in one school system (a total of 6186 students) were given a questionnaire to…
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Lin, Leo Shih-Chang; Wen, Yean-Fu
2013-01-01
Recent advances in wireless sensor network (WSN) applications such as the Internet of Things (IoT) have attracted a lot of attention. Sensor nodes have to monitor and cooperatively pass their data, such as temperature, sound, and pressure, through the network under constrained physical or environmental conditions. Quality of Service (QoS) is very sensitive to network delays. When resources are constrained and the number of receivers increases rapidly, how the sensor network can provide good QoS (measured as end-to-end delay) becomes a very critical problem. In this paper, a solution to the wireless sensor network multicasting problem is proposed, in which a mathematical model that provides services to accommodate delay fairness for each subscriber is constructed. Granting equal consideration to both network link capacity assignment and routing strategies for each multicast group guarantees intra-group and inter-group fairness of end-to-end delay. Minimizing delay while achieving fairness is ultimately accomplished through the Lagrangean relaxation method and the subgradient optimization technique. Test results indicate that the new system runs with greater effectiveness and efficiency. PMID:23493123
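The Lagrangean relaxation / subgradient machinery can be shown on a toy problem: move the hard constraint into the objective with a multiplier, minimise the resulting Lagrangean exactly, and ascend the dual function with projected subgradient steps. A minimal sketch (illustrative only, far simpler than the paper's multicast model):

```python
def subgradient_dual_ascent(iters=100, step=0.5):
    """Toy Lagrangean relaxation: minimise x**2 subject to x >= 1.
    Relaxation: L(x, lam) = x**2 + lam * (1 - x). For fixed lam the inner
    minimiser is x = lam / 2, and (1 - x) is a subgradient of the dual,
    so we ascend with projected subgradient steps (lam kept >= 0)."""
    lam = 0.0
    for _ in range(iters):
        x = lam / 2.0                             # solve relaxed subproblem
        lam = max(0.0, lam + step * (1.0 - x))    # projected subgradient step
    return lam / 2.0, lam
```

The iteration converges to the primal optimum x = 1 with multiplier lam = 2; in the paper the same pattern is applied with one multiplier per relaxed delay-fairness constraint.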
Attractiveness and School Achievement
ERIC Educational Resources Information Center
Salvia, John; And Others
1977-01-01
The purpose of this study was to ascertain the relationship between rated attractiveness and two measures of school performance. Attractive children received significantly higher report cards and, to some degree, higher achievement test scores than their unattractive peers. (Author)
High Rate Pulse Processing Algorithms for Microcalorimeters
NASA Astrophysics Data System (ADS)
Tan, Hui; Breus, Dimitry; Hennig, Wolfgang; Sabourov, Konstantin; Collins, Jeffrey W.; Warburton, William K.; Bertrand Doriese, W.; Ullom, Joel N.; Bacrania, Minesh K.; Hoover, Andrew S.; Rabin, Michael W.
2009-12-01
It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge-sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small in order to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Therefore, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. For large arrays, however, as much pulse processing as possible must be performed at the front end of readout electronics to avoid transferring large amounts of waveform data to a host computer for post-processing. In this paper, we present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in readout electronics that we are also currently developing, is to achieve sufficiently good energy resolution for most applications while being: a) simple enough to be implemented in the readout electronics; and, b) capable of processing overlapping pulses, and thus achieving much higher output count rates than those achieved by existing algorithms. Details of our algorithms are presented, and their performance is compared to that of the "optimal filter" that is currently the predominantly used pulse processing algorithm in the cryogenic-detector community.
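The "optimal filter" baseline mentioned at the end reduces, for stationary white noise, to a least-squares fit of each recorded pulse to a noiseless template. A minimal sketch of that idea follows; the template shape and numbers are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def matched_filter_energy(pulse, template):
    """Least-squares fit of a pulse to a unit-amplitude template.
    For stationary white noise this is the simplest form of the
    "optimal filter": a normalized dot product."""
    tc = template - template.mean()  # baseline-insensitive
    pc = pulse - pulse.mean()
    return np.dot(pc, tc) / np.dot(tc, tc)

# toy pulse: fast rise, slow exponential decay (illustrative shape)
t = np.arange(2000)
template = (1 - np.exp(-t / 5.0)) * np.exp(-t / 400.0)
rng = np.random.default_rng(0)
pulse = 3.7 * template + 0.01 * rng.standard_normal(t.size)
estimate = matched_filter_energy(pulse, template)  # close to 3.7
```

Overlapping pulses violate the single-template assumption, which is why the authors pursue filters that remain usable at high count rates.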
High rate pulse processing algorithms for microcalorimeters
Rabin, Michael; Hoover, Andrew S; Bacrania, Minesh K; Tan, Hui; Breus, Dimitry; Hennig, Wolfgang; Sabourov, Konstantin; Collins, Jeff; Warburton, William K; Doriese, Bertrand; Ullom, Joel N
2009-01-01
It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Consequently, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. Large arrays, however, require as much pulse processing as possible to be performed at the front end of the readout electronics to avoid transferring large amounts of waveform data to a host computer for processing. In this paper, the authors present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in readout electronics that the authors are also currently developing, is to achieve sufficiently good energy resolution for most applications while being (a) simple enough to be implemented in the readout electronics and (b) capable of processing overlapping pulses, thus achieving much higher output count rates than existing algorithms. Details of these algorithms are presented, and their performance is compared to that of the 'optimal filter' that is currently the dominant pulse processing algorithm in the cryogenic-detector community.
A class of parallel algorithms for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on the composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms, with topological variation, onto a two-dimensional processor array with nearest-neighbor connections, and, with cardinality variation, onto a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, achieving significantly higher efficiency.
Development of a Compound Optimization Approach Based on Imperialist Competitive Algorithm
NASA Astrophysics Data System (ADS)
Wang, Qimei; Yang, Zhihong; Wang, Yong
In this paper, an improved approach is developed for the imperialist competitive algorithm to achieve greater performance. The Nelder-Mead simplex method is applied to execute alternately with the original procedures of the algorithm. The approach is tested on twelve widely used benchmark functions and is also compared with other related studies. It is shown that the proposed approach has a faster convergence rate, better search ability, and higher stability than the original algorithm and other related methods.
Algorithmic synthesis using Python compiler
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej
2015-09-01
This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software by bypassing the fetch-decode-execute operations of traditional processors and possibly exploiting a greater level of parallelism, achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time-consuming. Using a higher level of abstraction and a high-level synthesis compiler, implementation time can be reduced. The compiler has been implemented in the Python language. This article describes the design, implementation, and results of the created tools.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
ERIC Educational Resources Information Center
Hartley, Tricia
2009-01-01
National learning and skills policy aims both to build economic prosperity and to achieve social justice. Participation in higher education (HE) has the potential to contribute substantially to both aims. That is why the Campaign for Learning has supported the ambition to increase the proportion of the working-age population with a Level 4…
ERIC Educational Resources Information Center
Walberg, Herbert J.
2010-01-01
For the last half century, higher spending and many modern reforms have failed to raise the achievement of students in the United States to the levels of other economically advanced countries. A possible explanation, says Herbert Walberg, is that much current education theory is ill informed about scientific psychology, often drawing on fads and…
Robust facial expression recognition algorithm based on local metric learning
NASA Astrophysics Data System (ADS)
Jiang, Bin; Jia, Kebin
2016-01-01
In facial expression recognition tasks, different facial expressions are often confused with each other. Motivated by the fact that a learned metric can significantly improve the accuracy of classification, a facial expression recognition algorithm based on local metric learning is proposed. First, k-nearest neighbors of the given testing sample are determined from the total training data. Second, chunklets are selected from the k-nearest neighbors. Finally, the optimal transformation matrix is computed by maximizing the total variance between different chunklets and minimizing the total variance of instances in the same chunklet. The proposed algorithm can find the suitable distance metric for every testing sample and improve the performance on facial expression recognition. Furthermore, the proposed algorithm can be used for vector-based and matrix-based facial expression recognition. Experimental results demonstrate that the proposed algorithm could achieve higher recognition rates and be more robust than baseline algorithms on the JAFFE, CK, and RaFD databases.
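The chunklet objective described here (small variance within chunklets, large variance between them) is closely related to Relevant Component Analysis. The sketch below whitens by the average within-chunklet covariance; it is an illustrative stand-in, not the authors' exact optimization:

```python
import numpy as np

def rca_transform(chunklets):
    """Whitening by the average within-chunklet covariance (Relevant
    Component Analysis): after the transform, within-chunklet scatter
    is isotropic, so distances along noisy directions shrink."""
    d = chunklets[0].shape[1]
    cov = np.zeros((d, d))
    total = 0
    for ch in chunklets:
        centered = ch - ch.mean(axis=0)
        cov += centered.T @ centered
        total += ch.shape[0]
    cov /= total
    # transform x -> cov^(-1/2) x realizes the learned metric M = cov^(-1)
    vals, vecs = np.linalg.eigh(cov + 1e-8 * np.eye(d))
    return vecs @ np.diag(vals ** -0.5) @ vecs.T
```

Applying the returned matrix to feature vectors before nearest-neighbor classification implements the "learned metric" step on which such algorithms rely.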
Parallel algorithms for dynamically partitioning unstructured grids
Diniz, P.; Plimpton, S.; Hendrickson, B.; Leland, R.
1994-10-01
Grid partitioning is the method of choice for decomposing a wide variety of computational problems into naturally parallel pieces. In problems where computational load on the grid or the grid itself changes as the simulation progresses, the ability to repartition dynamically and in parallel is attractive for achieving higher performance. We describe three algorithms suitable for parallel dynamic load-balancing which attempt to partition unstructured grids so that computational load is balanced and communication is minimized. The execution time of algorithms and the quality of the partitions they generate are compared to results from serial partitioners for two large grids. The integration of the algorithms into a parallel particle simulation is also briefly discussed.
Ehsan, Shoaib; Kanwal, Nadia; Clark, Adrian F; McDonald-Maier, Klaus D
2012-01-01
Speeded-Up Robust Features is a feature extraction algorithm designed for real-time execution, although this is rarely achievable on low-power hardware such as that in mobile robots. One way to reduce the computation is to discard some of the scale-space octaves, and previous research has simply discarded the higher octaves. This paper shows that this approach is not always the most sensible and presents an algorithm for choosing which octaves to discard based on the properties of the imagery. Results obtained with this best octaves algorithm show that it is able to achieve a significant reduction in computation without compromising matching performance. PMID:21712160
Rempp, Florian; Mahler, Guenter; Michel, Mathias
2007-09-15
We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qubit may repeatedly be cooled without adding additional qubits to the system. By using a product Liouville space to model the bath contact, we calculate the density matrix of the system after a given number of applications of the algorithm.
Linear-scaling and parallelisable algorithms for stochastic quantum chemistry
NASA Astrophysics Data System (ADS)
Booth, George H.; Smart, Simon D.; Alavi, Ali
2014-07-01
For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimised paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelisation which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the methods introduce new concepts required for algorithmic efficiency. In this paper, we explore these concepts and detail an algorithm used for Full Configuration Interaction Quantum Monte Carlo (FCIQMC), which is implemented and available in MOLPRO and as a standalone code, and is designed for high-level parallelism and linear-scaling with walker number. Many of the algorithms are also in use in, or can be transferred to, other stochastic quantum chemical methods and implementations. We apply these algorithms to the strongly correlated chromium dimer to demonstrate their efficiency and parallelism.
Fast parallel algorithm for slicing STL based on pipeline
NASA Astrophysics Data System (ADS)
Ma, Xulong; Lin, Feng; Yao, Bo
2016-04-01
In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of the number of threads and the number of layers are investigated in a series of experiments. The experimental results show that both are significant factors in the speedup ratio: speedup versus thread count follows a positive relationship that agrees closely with Amdahl's law, and speedup versus layer count follows a positive relationship that agrees with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm, based on data parallelism, is used in experiments to show that the pipeline parallel mode is more efficient. A final case study demonstrates the strong performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process; compared with the data-parallel slicing algorithm, it achieves a much higher speedup ratio and efficiency.
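The pipeline idea, in which each layer flows through a fixed sequence of stages (e.g. plane intersection, then contour linking) running concurrently, can be sketched with thread-connected queues. The stage functions here are toy stand-ins for the real slicing steps:

```python
import threading
import queue

SENTINEL = object()

def stage(fn, inbox, outbox):
    """Worker for one pipeline stage: apply fn until the sentinel arrives."""
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)
            return
        outbox.put(fn(item))

def run_pipeline(items, fns):
    """Chain stages with FIFO queues so consecutive items overlap in time
    while output order is preserved (one thread per stage)."""
    queues = [queue.Queue() for _ in range(len(fns) + 1)]
    threads = [threading.Thread(target=stage, args=(f, qi, qo))
               for f, qi, qo in zip(fns, queues, queues[1:])]
    for t in threads:
        t.start()
    for it in items:
        queues[0].put(it)
    queues[0].put(SENTINEL)
    out = []
    while True:
        r = queues[-1].get()
        if r is SENTINEL:
            break
        out.append(r)
    for t in threads:
        t.join()
    return out

# toy stand-ins for "intersect layer plane" and "link segments into contours"
layers = run_pipeline(range(5), [lambda z: z * 0.2, lambda z: round(z, 1)])
```

With real stage functions, Amdahl-style scaling emerges because each stage occupies one core while later layers stream in behind earlier ones.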
General cardinality genetic algorithms
Koehler; Bhattacharyya; Vose
1997-01-01
A complete generalization of the Vose genetic algorithm model from the binary to the higher cardinality case is provided. Boolean AND and EXCLUSIVE-OR operators are replaced by multiplication and addition over rings of integers. Walsh matrices are generalized with finite Fourier transforms for higher cardinality usage. A comparison of results to the binary case is provided. PMID:10021767
[Deregulation and Higher Education].
ERIC Educational Resources Information Center
Business Officer, 1982
1982-01-01
The extent to which the Reagan Administration has achieved its deregulation goals in the area of higher education is addressed in three articles: "Deregulation and Higher Education: The View a Year Later" (Sheldon Elliot Steinbach); "Student Financial Aid Deregulation: Rhetoric or Reality?" (Robin E. Jenkins); and "Administration Reform of Civil…
General Structure Design for Fast Image Processing Algorithms Based upon FPGA DSP Slice
NASA Astrophysics Data System (ADS)
Wasfy, Wael; Zheng, Hong
Increasing the speed and accuracy of fast image processing algorithms that compute image intensity for low-level 3x3 algorithms with different kernels but the same parallel calculation method is the target of this paper. The FPGA is one of the fastest embedded systems that can be used for implementing fast image processing algorithms. By using the DSP slice module inside the FPGA, we aim to exploit the DSP slice's speed, accuracy, higher bit width in calculations, and flexible equation-configuration capabilities. Using a higher number of bits during algorithm calculations leads to higher accuracy than the same calculations with fewer bits; at the same time, reducing FPGA resource usage to the minimum the algorithm's calculations require is an important goal. The recommended design therefore uses as few DSP slices as possible while benefiting from their accuracy: 48-bit addition and 18 x 18-bit multiplication. To validate the design, the Gaussian filter and Sobel-x edge detector image processing algorithms were implemented. A comparison is also made with another design, which uses at most 12-bit accuracy in addition and multiplication, to demonstrate the improvements in calculation accuracy and speed.
Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs
Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina
2015-01-01
In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because general text mining algorithms are not effective on online course material, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF algorithm's output values; the highest-scoring terms are selected as knowledge points. Course documents of "C programming language" were selected for the experiment in this study. The results show that the proposed approach can achieve satisfactory accuracy and recall rates. PMID:26448738
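The TF-IDF weighting step at the heart of such extraction can be illustrated in a few lines. This is a simplified stand-in for the AECKP weighting (no segmentation or POS tagging), with made-up example documents:

```python
import math
from collections import Counter

def tfidf_top_terms(docs, k=3):
    """Score each term by its best TF-IDF value across documents and
    return the k highest-scoring terms as candidate knowledge points."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    n = len(docs)
    scores = Counter()
    for doc in tokenized:
        tf = Counter(doc)
        for term, f in tf.items():
            idf = math.log((1 + n) / (1 + df[term]))  # smoothed IDF
            scores[term] = max(scores[term], (f / len(doc)) * idf)
    return [t for t, _ in scores.most_common(k)]

docs = ["pointer arithmetic and pointer types",
        "array indexing and pointer decay",
        "control flow with loops and branches"]
top = tfidf_top_terms(docs, 5)  # common words like "and" score near zero
```

Real systems would tokenize with a segmenter and filter by part of speech before scoring, as the abstract describes.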
Wavelet Algorithms for Illumination Computations
NASA Astrophysics Data System (ADS)
Schroder, Peter
One of the core problems of computer graphics is the computation of the equilibrium distribution of light in a scene. This distribution is given as the solution to a Fredholm integral equation of the second kind involving an integral over all surfaces in the scene. In the general case such solutions can only be numerically approximated, and are generally costly to compute, due to the geometric complexity of typical computer graphics scenes. For this computation both Monte Carlo and finite element techniques (or hybrid approaches) are typically used. A simplified version of the illumination problem is known as radiosity, which assumes that all surfaces are diffuse reflectors. For this case hierarchical techniques, first introduced by Hanrahan et al. (32), have recently gained prominence. The hierarchical approaches lead to an asymptotic improvement when only finite precision is required. The resulting algorithms have cost proportional to O(k^2 + n) versus the usual O(n^2) (k is the number of input surfaces, n the number of finite elements into which the input surfaces are meshed). Similarly a hierarchical technique has been introduced for the more general radiance problem (which allows glossy reflectors) by Aupperle et al. (6). In this dissertation we show the equivalence of these hierarchical techniques to the use of a Haar wavelet basis in a general Galerkin framework. By so doing, we come to a deeper understanding of the properties of the numerical approximations used and are able to extend the hierarchical techniques to higher orders. In particular, we show the correspondence of the geometric arguments underlying hierarchical methods to the theory of Calderon-Zygmund operators and their sparse realization in wavelet bases. The resulting wavelet algorithms for radiosity and radiance are analyzed and numerical results achieved with our implementation are reported. We find that the resulting algorithms achieve smaller and smoother errors at equivalent work.
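The Haar basis underlying these hierarchical techniques can be computed by recursive averaging and differencing; smooth parts of the transport kernel then produce near-zero detail coefficients, which is the source of the sparsity. A minimal 1-D sketch (power-of-two length assumed):

```python
import numpy as np

def haar_1d(v):
    """Full orthonormal 1-D Haar transform by recursive averaging and
    differencing. Detail coefficients vanish on locally constant data,
    the property hierarchical radiosity exploits."""
    v = v.astype(float).copy()
    n = v.size  # assumed to be a power of two
    while n > 1:
        half = n // 2
        avg = (v[:n:2] + v[1:n:2]) / np.sqrt(2)
        diff = (v[:n:2] - v[1:n:2]) / np.sqrt(2)
        v[:half], v[half:n] = avg, diff
        n = half
    return v
```

Because each step is orthogonal, the transform preserves norms, and a constant signal compresses to a single nonzero coefficient.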
Modeling Achievement by Measuring the Enacted Instruction
ERIC Educational Resources Information Center
Walkup, John R.; Jones, Ben S.
2008-01-01
This article presents a mathematical algorithm that relates student achievement with directly observable, quantifiable teacher and student behaviors, producing a modified form of the Walberg model. The algorithm (1) expands the measurable factors that comprise the quality of instruction in a linear basis of research-based teaching components and…
Solution algorithms for the two-dimensional Euler equations on unstructured meshes
NASA Technical Reports Server (NTRS)
Whitaker, D. L.; Slack, David C.; Walters, Robert W.
1990-01-01
The objective of the study was to analyze implicit techniques employed in structured grid algorithms for solving two-dimensional Euler equations and extend them to unstructured solvers in order to accelerate convergence rates. A comparison is made between nine different algorithms for both first-order and second-order accurate solutions. Higher-order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The discussion is illustrated by results for flow over a transonic circular arc.
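A common form of monotone linear reconstruction limits each cell's slope with the minmod function, which drops to first order at extrema to avoid new oscillations. A 1-D sketch (illustrative; the paper's multidimensional procedure is more involved):

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: pick the smaller-magnitude slope when both agree
    in sign, zero at extrema, keeping the reconstruction monotone."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def reconstruct_faces(u):
    """Second-order monotone linear reconstruction of left/right face
    states from cell averages (interior cells only)."""
    du_minus = u[1:-1] - u[:-2]
    du_plus = u[2:] - u[1:-1]
    slope = minmod(du_minus, du_plus)
    return u[1:-1] - 0.5 * slope, u[1:-1] + 0.5 * slope

u = np.array([0.0, 0.2, 0.6, 1.4, 1.5])
left, right = reconstruct_faces(u)
```

The face states feed a Riemann solver for the Euler fluxes; the limiter is what lets the scheme stay second-order in smooth regions without oscillating at shocks.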
Parental Involvement and Academic Achievement
ERIC Educational Resources Information Center
Goodwin, Sarah Christine
2015-01-01
This research study examined the correlation between student achievement and parent's perceptions of their involvement in their child's schooling. Parent participants completed the Parent Involvement Project Parent Questionnaire. Results slightly indicated parents of students with higher level of achievement perceived less demand or invitations…
Using Strassen's algorithm to accelerate the solution of linear systems
NASA Technical Reports Server (NTRS)
Bailey, David H.; Lee, King; Simon, Horst D.
1990-01-01
Strassen's algorithm for fast matrix-matrix multiplication has been implemented for matrices of arbitrary shapes on the CRAY-2 and CRAY Y-MP supercomputers. Several techniques have been used to reduce the scratch space requirement for this algorithm while simultaneously preserving a high level of performance. When the resulting Strassen-based matrix multiply routine is combined with some routines from the new LAPACK library, LU decomposition can be performed with rates significantly higher than those achieved by conventional means. We succeeded in factoring a 2048 x 2048 matrix on the CRAY Y-MP at a rate equivalent to 325 MFLOPS.
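Strassen's recursion trades 8 half-size multiplies for 7 plus extra additions, which is the source of both the speedup and the scratch-space pressure the authors manage. A sketch for square, power-of-two sizes (the paper handles arbitrary shapes):

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """One Strassen recursion level: 7 multiplies instead of 8.
    Falls back to the library multiply below the cutoff; square
    power-of-two matrices assumed for brevity."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

Tuning the cutoff balances the recursion's asymptotic gain against the overhead of the extra additions and temporaries, the same trade-off the CRAY implementation faced.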
Efficient maximum entropy algorithms for electronic structure
Silver, R.N.; Roeder, H.; Voter, A.F.; Kress, J.D.
1996-04-01
Two Chebyshev recursion methods are presented for calculations with very large sparse Hamiltonians, the kernel polynomial method (KPM) and the maximum entropy method (MEM). If limited statistical accuracy and energy resolution are acceptable, they provide linear scaling methods for the calculation of physical properties involving large numbers of eigenstates such as densities of states, spectral functions, thermodynamics, total energies for Monte Carlo simulations and forces for molecular dynamics. KPM provides a uniform approximation to a DOS, with resolution inversely proportional to the number of Chebyshev moments, while MEM can achieve significantly higher, but non-uniform, resolution at the risk of possible artifacts. This paper emphasizes efficient algorithms.
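The KPM recursion can be sketched compactly: accumulate Chebyshev moments of the Hamiltonian, damp them with the Jackson kernel, and resum. This toy version uses exact traces on a dense matrix; large sparse problems replace them with stochastic trace estimates, and the spectrum must be pre-scaled into (-1, 1):

```python
import numpy as np

def kpm_dos(H, n_moments=200, n_pts=400):
    """Kernel polynomial estimate of the density of states of a
    Hermitian matrix H whose spectrum lies inside (-1, 1)."""
    n = H.shape[0]
    T_prev, T_cur = np.eye(n), H.copy()
    mu = np.empty(n_moments)
    mu[0], mu[1] = n, np.trace(H)
    for k in range(2, n_moments):
        T_next = 2 * H @ T_cur - T_prev  # Chebyshev recursion
        mu[k] = np.trace(T_next)
        T_prev, T_cur = T_cur, T_next
    # Jackson kernel damps Gibbs oscillations from truncating the series
    ms = np.arange(n_moments)
    N1 = n_moments + 1
    g = ((n_moments - ms + 1) * np.cos(np.pi * ms / N1)
         + np.sin(np.pi * ms / N1) / np.tan(np.pi / N1)) / N1
    x = np.linspace(-0.99, 0.99, n_pts)
    cheb = np.cos(np.outer(np.arccos(x), ms))  # T_m(x)
    series = g[0] * mu[0] + 2 * (cheb[:, 1:] * (g[1:] * mu[1:])).sum(axis=1)
    return x, series / (np.pi * np.sqrt(1 - x ** 2))
```

The resolution is set by the number of moments, matching the abstract's point that KPM trades uniform resolution for strictly controlled cost.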
NASA Astrophysics Data System (ADS)
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
Measuring and Recording Student Achievement
ERIC Educational Resources Information Center
Universities UK, 2004
2004-01-01
The Measuring and Recording Student Achievement Scoping Group was established by Universities UK and the Standing Conference of Principals (SCOP), with the support of the Higher Education Funding Council for England (HEFCE) in October 2003 to review the recommendations from the UK Government White Paper "The Future of Higher Education" relating…
Steganographic system based on higher-order statistics
NASA Astrophysics Data System (ADS)
Tzschoppe, Roman; Baeuml, Robert; Huber, Johannes; Kaup, Andre
2003-06-01
Universal blind steganalysis attempts to detect steganographic data without knowledge about the applied steganographic system. Farid proposed such a detection algorithm based on higher-order statistics for separating original images from stego images. His method shows an astonishing performance on current steganographic schemes. Starting from the statistical approach in Farid's algorithm, we investigate the well known steganographic tool Jsteg as well as a newer approach proposed by Eggers et al., which relies on histogram-preserving data mapping. Both schemes show weaknesses leading to a certain detectability. Further analysis shows which statistic characteristics make both schemes vulnerable. Based on these results, the histogram preserving approach is enhanced such that it achieves perfect security with respect to Farid's algorithm.
Sobel, E.; Lange, K.; O'Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
Preconditioned quantum linear system algorithm.
Clader, B D; Jacobs, B C; Sprouse, C R
2013-06-21
We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm. PMID:23829722
Research on algorithm for infrared hyperspectral imaging Fourier transform spectrometer technology
NASA Astrophysics Data System (ADS)
Wan, Lifang; Chen, Yan; Liao, Ningfang; Lv, Hang; He, Shufang; Li, Yasheng
2015-08-01
This paper reports an algorithm for infrared hyperspectral imaging radiometric spectrometer technology. Six different apodization functions are used and compared, and Forman's phase-correction technique is investigated and improved; the fast Fourier transform (FFT) is used instead of linear convolution to reduce the amount of computation. Interferograms acquired by the infrared hyperspectral imaging radiometric spectrometer are corrected and reconstructed by the improved algorithm, which reduces noise and accelerates the computation while achieving higher spectral accuracy.
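As a rough illustration of the processing chain the abstract describes (apodize, then transform with an FFT rather than linear convolution), here is a minimal sketch. The triangular apodization window and the assumption of a symmetric, already phase-corrected interferogram are simplifications; Forman phase correction is omitted.

```python
import cmath
import math

def spectrum_from_interferogram(igram, apodize=True):
    """Recover a magnitude spectrum from a symmetric, phase-corrected
    interferogram: optional triangular apodization, then an FFT."""
    n = len(igram)
    if apodize:
        # Triangular apodization window (one of many possible choices)
        igram = [v * (1.0 - abs(2.0 * i / (n - 1) - 1.0)) for i, v in enumerate(igram)]

    def fft(x):
        # Recursive radix-2 FFT; this is what keeps the transform
        # O(n log n) instead of the O(n^2) of direct convolution.
        if len(x) == 1:
            return x
        even, odd = fft(x[0::2]), fft(x[1::2])
        half = len(x) // 2
        out = [0j] * len(x)
        for k in range(half):
            tw = cmath.exp(-2j * math.pi * k / len(x)) * odd[k]
            out[k], out[k + half] = even[k] + tw, even[k] - tw
        return out

    return [abs(v) for v in fft([complex(v) for v in igram])]

# A pure cosine interferogram concentrates its spectrum in one bin
igram = [math.cos(2 * math.pi * 8 * i / 64) for i in range(64)]
spec = spectrum_from_interferogram(igram, apodize=False)
peak = spec.index(max(spec))
print(peak, round(max(spec), 3))  # peak at bin 8 (or its mirror, bin 56)
```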
Optical rate sensor algorithms
NASA Technical Reports Server (NTRS)
Uhde-Lacovara, Jo A.
1989-01-01
Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.
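The abstract does not give the recursive differentiator's exact form; a minimal sketch, assuming a first-order low-pass-filtered finite difference (the smoothing constant `alpha` plays the role of the VRF/rise-time trade-off above), might look like:

```python
import random

def recursive_differentiator(samples, dt, alpha=0.2):
    """First-order recursive differentiator: exponentially smoothed
    finite differences.  Smaller alpha gives a larger variance
    reduction at the cost of a longer rise time."""
    rate, rates = 0.0, []
    for prev, curr in zip(samples, samples[1:]):
        raw = (curr - prev) / dt                   # noisy finite difference
        rate = (1.0 - alpha) * rate + alpha * raw  # recursive smoothing
        rates.append(rate)
    return rates

# Star-position samples moving at a constant 1.0 unit/s, 10 Hz, with noise
random.seed(0)
dt, true_rate = 0.1, 1.0
positions = [true_rate * k * dt + random.gauss(0.0, 1e-3) for k in range(200)]
est = recursive_differentiator(positions, dt)
print(round(est[-1], 2))  # ~ 1.0
```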
Sharing Leadership Responsibilities Results in Achievement Gains
ERIC Educational Resources Information Center
Armistead, Lew
2010-01-01
Collective, not individual, leadership in schools has a greater impact on student achievement; when principals and teachers share leadership responsibilities, student achievement is higher; and schools having high student achievement also display a vision for student achievement and teacher growth. Those are just a few of the insights into school…
Graded Achievement, Tested Achievement, and Validity
ERIC Educational Resources Information Center
Brookhart, Susan M.
2015-01-01
Twenty-eight studies of grades, over a century, were reviewed using the argument-based approach to validity suggested by Kane as a theoretical framework. The review draws conclusions about the meaning of graded achievement, its relation to tested achievement, and changes in the construct of graded achievement over time. "Graded…
ERIC Educational Resources Information Center
Hendrickson, Robert M.; Gregory, Dennis E.
Decisions made by federal and state courts during 1983 concerning higher education are reported in this chapter. Issues of employment and the treatment of students underlay the bulk of the litigation. Specific topics addressed in these and other cases included federal authority to enforce regulations against age discrimination and to revoke an…
ERIC Educational Resources Information Center
Hendrickson, Robert M.
Litigation in 1987 was very brisk with an increase in the number of higher education cases reviewed. Cases discussed in this chapter are organized under four major topics: (1) intergovernmental relations; (2) employees, involving discrimination claims, tenured and nontenured faculty, collective bargaining and denial of employee benefits; (3)…
ERIC Educational Resources Information Center
Hendrickson, Robert M.; Finnegan, Dorothy E.
The higher education case law in 1988 is extensive. Cases discussed in this chapter are organized under five major topics: (1) intergovernmental relations; (2) employees, involving discrimination claims, tenured and nontenured faculty, collective bargaining, and denial of employee benefits; (3) students, involving admissions, financial aid, First…
ERIC Educational Resources Information Center
Hendrickson, Robert M.
This eighth chapter of "The Yearbook of School Law, 1986" summarizes and analyzes over 330 state and federal court cases litigated in 1985 in which institutions of higher education were involved. Among the topics examined were relationships between postsecondary institutions and various governmental agencies; discrimination in the employment of…
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
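Activity selection, mentioned above, is the textbook case where a dominance relation (an activity finishing earlier dominates one finishing later) justifies the greedy choice. A standard sketch:

```python
def select_activities(intervals):
    """Greedy activity selection: sort by finish time and keep each
    activity that is compatible with the last one chosen.  The
    dominance argument (earliest finish dominates) proves that no
    other choice can lead to a larger compatible set."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:  # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(len(select_activities(acts)))  # → 3
```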
Speckle reduction via higher order total variation approach.
Wensen Feng; Hong Lei; Yang Gao
2014-04-01
Multiplicative noise (also known as speckle) reduction is a prerequisite for many image-processing tasks in coherent imaging systems, such as the synthetic aperture radar. One approach extensively used in this area is based on total variation (TV) regularization, which can recover significantly sharp edges of an image, but suffers from the staircase-like artifacts. In order to overcome the undesirable deficiency, we propose two novel models for removing multiplicative noise based on total generalized variation (TGV) penalty. The TGV regularization has been mathematically proven to be able to eliminate the staircasing artifacts by being aware of higher order smoothness. Furthermore, an efficient algorithm is developed for solving the TGV-based optimization problems. Numerical experiments demonstrate that our proposed methods achieve state-of-the-art results, both visually and quantitatively. In particular, when the image has some higher order smoothness, our methods outperform the TV-based algorithms. PMID:24808350
Parallel algorithms and architectures for the manipulator inertia matrix
Amin-Javaheri, M.
1989-01-01
Several parallel algorithms and architectures to compute the manipulator inertia matrix in real time are proposed. An O(N) and an O(log₂N) parallel algorithm based upon recursive computation of the inertial parameters of sets of composite rigid bodies are formulated. One- and two-dimensional systolic architectures are presented to implement the O(N) parallel algorithm. A cube architecture is employed to compute the diagonal elements of the inertia matrix in O(log₂N) time and the upper off-diagonal elements in O(N) time. The resulting K₁O(N) + K₂O(log₂N) parallel algorithm is more efficient for a cube network implementation. All the architectural configurations are based upon a VLSI Robotics Processor exploiting fine-grain parallelism. In evaluating all the architectural configurations, significant performance parameters such as I/O time and idle time due to processor synchronization, as well as CPU utilization and on-chip memory size, are fully included. The O(N) and O(log₂N) parallel algorithms adhere to the precedence relationships among the processors. To achieve a higher speedup factor, however, parallel algorithms in conjunction with Non-Strict Computational Models are devised to relax interprocess precedence and, as a result, decrease the effective computational delays. The effectiveness of the Non-Strict Computational Algorithms is verified by computer simulations based on a PUMA 560 robot manipulator. It is demonstrated that a combination of parallel algorithms and architectures results in a very effective approach to achieving real-time response for computing the manipulator inertia matrix.
Pedestrian navigation algorithm based on MIMU with building heading/magnetometer
NASA Astrophysics Data System (ADS)
Meng, Xiang-bin; Pan, Xian-fei; Chen, Chang-hao; Hu, Xiao-ping
2016-01-01
To improve the accuracy of a low-cost MIMU inertial navigation system in pedestrian navigation, and to reduce the heading error caused by the low accuracy of the MIMU components, a novel algorithm is put forward that fuses the building-heading constraint information with the magnetic-heading information. We analyse the applicable conditions and the corrective effect of building heading and magnetic heading, and then conduct experiments in an indoor environment. The results show that the proposed algorithm better restrains heading drift and achieves higher navigation precision.
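The abstract does not spell out the fusion rule; one illustrative sketch of combining a drift-prone gyro heading with a magnetic heading and a building-heading constraint is below. The corridor axes, snap tolerance, and magnetometer weight are all assumptions made for illustration, not the paper's values.

```python
def fuse_heading(gyro_heading, mag_heading,
                 building_axes_deg=(0.0, 90.0, 180.0, 270.0),
                 snap_tol_deg=15.0, mag_weight=0.02):
    """Heuristic heading fusion: nudge the gyro heading toward the
    magnetometer reading, then, if close to one of the building's
    dominant corridor directions, pull it halfway toward that axis.
    All thresholds and weights here are illustrative assumptions."""
    # Signed angular difference in (-180, 180]
    h = gyro_heading + mag_weight * ((mag_heading - gyro_heading + 180.0) % 360.0 - 180.0)
    for axis in building_axes_deg:
        diff = (axis - h + 180.0) % 360.0 - 180.0
        if abs(diff) < snap_tol_deg:
            return (h + 0.5 * diff) % 360.0  # building-heading constraint
    return h % 360.0

# Gyro drifted to 8 deg while walking a 0-deg corridor; magnetometer reads 4 deg
print(round(fuse_heading(8.0, 4.0), 2))  # → 3.96
```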
Quantum defragmentation algorithm
Burgarth, Daniel; Giovannetti, Vittorio
2010-08-15
In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.
Achieving Communicative Competence: The Role of Higher Education.
ERIC Educational Resources Information Center
Fatt, James Poon Teng
1991-01-01
A study investigated the communicative competencies required in English as a Second Language (ESL) by 200 business and accounting students at Nanyang Technological Institute (Singapore) and explored a communicatively based ESL curriculum design. Student attitudes about the current linguistically based ESL program were also examined. Results are…
Time Management and Academic Achievement of Higher Secondary Students
ERIC Educational Resources Information Center
Cyril, A. Vences
2015-01-01
The only thing which can't be changed by man is time. One cannot get back time that is lost or gone. Nothing can be substituted for time. Time management is actually self-management. The skills that people need to manage others are the same skills that are required to manage themselves. The purpose of the present study was to explore the relation between…
Higher Order Thinking Skills: Challenging All Students to Achieve
ERIC Educational Resources Information Center
Williams, R. Bruce
2007-01-01
Explicit instruction in thinking skills must be a priority goal of all teachers. In this book, the author presents a framework of the five Rs: Relevancy, Richness, Relatedness, Rigor, and Recursiveness. The framework serves to illuminate instruction in critical and creative thinking skills for K-12 teachers across content areas. Each chapter…
Middle Grades: Quality Teaching Equals Higher Student Achievement. Research Brief
ERIC Educational Resources Information Center
Bottoms, Gene; Hertl, Jordan; Mollette, Melinda; Patterson, Lenora
2014-01-01
The middles grades are critical to public school systems and our nation's economy. It's the make-or-break point in students' futures. Studies repeatedly show when students are not engaged and lose interest in the middle grades, they are likely to fall behind in ninth grade and later drop out of school. When this happens, the workforce suffers, and…
Messy genetic algorithms: Recent developments
Kargupta, H.
1996-09-01
Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier work on messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA), an O(Λ^κ(ℓ² + κ)) sample-complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.
A novel image encryption algorithm using chaos and reversible cellular automata
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Luan, Dapeng
2013-11-01
In this paper, a novel image encryption scheme is proposed based on reversible cellular automata (RCA) combined with chaos. The algorithm uses an intertwining logistic map with complex behavior and periodic-boundary reversible cellular automata. We split each pixel of the image into units of 4 bits, then use a pseudorandom key stream generated by the intertwining logistic map to permute these units in the confusion stage. In the diffusion stage, two-dimensional reversible cellular automata, which are discrete dynamical systems, are iterated for many rounds to achieve bit-level diffusion; only the higher 4 bits of each pixel are considered because they carry almost all the information of an image. Theoretical analysis and experimental results demonstrate that the proposed algorithm achieves a high security level and performs well against common attacks such as differential and statistical attacks. The algorithm belongs to the class of symmetric systems.
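The confusion stage can be sketched as follows, assuming a plain logistic map as a stand-in for the paper's intertwining variant; splitting pixels into 4-bit units and permuting them in a chaos-derived order is the idea the abstract describes.

```python
def logistic_keystream(n, x=0.61, r=3.99):
    """Chaotic keystream from a plain logistic map (the paper uses an
    intertwining logistic map; this is a simplified stand-in)."""
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

def permute_nibbles(pixels, key=0.61):
    """Confusion stage sketch: split 8-bit pixels into 4-bit units and
    permute them in the order induced by the chaotic keystream."""
    units = []
    for p in pixels:
        units += [p >> 4, p & 0x0F]
    ks = logistic_keystream(len(units), key)
    order = sorted(range(len(units)), key=ks.__getitem__)
    return [units[i] for i in order]

def unpermute_nibbles(scrambled, key=0.61):
    """Invert the permutation (same key) and reassemble 8-bit pixels."""
    ks = logistic_keystream(len(scrambled), key)
    order = sorted(range(len(scrambled)), key=ks.__getitem__)
    units = [0] * len(scrambled)
    for dst, src in enumerate(order):
        units[src] = scrambled[dst]
    return [(units[2 * i] << 4) | units[2 * i + 1] for i in range(len(units) // 2)]

pixels = [0x3C, 0xA7, 0x00, 0xFF]
scrambled = permute_nibbles(pixels)
print(unpermute_nibbles(scrambled) == pixels)  # → True
```

Because the keystream is fully determined by the key, the receiver regenerates the same order and inverts the permutation, which is what makes the scheme symmetric.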
Analysis and an image recovery algorithm for ultrasonic tomography system
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1994-01-01
The problem of an ultrasonic reflectivity tomography is similar to that of a spotlight-mode aircraft Synthetic Aperture Radar (SAR) system. The analysis for a circular path spotlight mode SAR in this paper leads to the insight of the system characteristics. It indicates that such a system when operated in a wide bandwidth is capable of achieving the ultimate resolution: one quarter of the wavelength of the carrier frequency. An efficient processing algorithm based on the exact two dimensional spectrum is presented. The results of simulation indicate that the impulse responses meet the predicted resolution performance. Compared to an algorithm previously developed for the ultrasonic reflectivity tomography, the throughput rate of this algorithm is about ten times higher.
An ellipse detection algorithm based on edge classification
NASA Astrophysics Data System (ADS)
Yu, Liu; Chen, Feng; Huang, Jianming; Wei, Xiangquan
2015-12-01
In order to enhance the speed and accuracy of ellipse detection, an ellipse detection algorithm based on edge classification is proposed. Redundant edge points are removed by serializing edges into points and applying a distance constraint between edge points. Effective classification is achieved using the angle between edge points as the criterion, which greatly increases the probability that randomly selected edge points fall on the same ellipse. Ellipse-fitting accuracy is significantly improved by optimizing the RED algorithm, using Euclidean distance to measure the distance from an edge point to the elliptical boundary. Experimental results show that the algorithm detects ellipses well even when edges are noisy or block each other, with higher detection precision and less time consumption than the RED algorithm.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
A retrodictive stochastic simulation algorithm
Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.
2010-05-20
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
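For contrast with the retrodictive variant, the usual predictive stochastic simulation algorithm (Gillespie's direct method) for a simple decay reaction can be sketched as:

```python
import math
import random

def gillespie_decay(n0, k, t_end, rng):
    """Predictive Gillespie SSA for the decay reaction A -> 0 with
    rate constant k: draw an exponential waiting time from the total
    propensity k*n, then fire one event at a time until t_end."""
    t, n = 0.0, n0
    while n > 0:
        t += rng.expovariate(k * n)  # waiting time to the next decay
        if t > t_end:
            break
        n -= 1
    return n

rng = random.Random(1)
survivors = [gillespie_decay(1000, 0.5, 1.0, rng) for _ in range(200)]
mean = sum(survivors) / len(survivors)
print(round(mean))  # near 1000 * exp(-0.5), i.e. about 607
```

The retrodictive algorithm of the paper complements this: instead of sampling forward trajectories from a known initial state, it samples backward from a known final state to infer likely initial states.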
NASA Astrophysics Data System (ADS)
El-Guibaly, Fayez; Sabaa, A.
1996-10-01
In this paper, we introduce modifications to the classic CORDIC algorithm to reduce the number of iterations, and hence the rounding noise. The modified algorithm needs, at most, half the number of iterations to achieve the same accuracy as the classical one. The modifications are applicable to linear, circular and hyperbolic CORDIC in both vectoring and rotation modes. Simulations illustrate the effect of the new modifications.
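The classic circular CORDIC in rotation mode, which the modifications start from, can be sketched as follows (a textbook form, not the modified algorithm):

```python
import math

def cordic_rotate(angle, n_iter=24):
    """Classic circular CORDIC in rotation mode: rotate (1, 0) by
    `angle` using only shift-and-add style micro-rotations; returns
    (cos(angle), sin(angle)).  Each iteration halves the remaining
    angular resolution, so accuracy grows one bit per iteration."""
    # Precompute the CORDIC gain K = prod(1/sqrt(1 + 2^-2i))
    k = 1.0
    for i in range(n_iter):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, angle  # pre-scale by K so the result is unit length
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0  # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return x, y

c, s = cordic_rotate(0.5)
print(round(c, 6), round(s, 6))  # close to cos(0.5), sin(0.5)
```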
Testing an earthquake prediction algorithm
Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.
1997-01-01
A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
Network representations of knowledge about chemical equilibrium: Variations with achievement
NASA Astrophysics Data System (ADS)
Wilson, Janice M.
This study examined variation in the organization of domain-specific knowledge by 50 Year-12 chemistry students and 4 chemistry teachers. The study used nonmetric multidimensional scaling (MDS) and the Pathfinder network-generating algorithm to investigate individual and group differences in student concept maps about chemical equilibrium. MDS was used to represent the individual maps in two-dimensional space, based on the presence or absence of paired propositional links. The resulting separation between maps reflected degree of hierarchical structure, but also reflected independent measures of student achievement. Pathfinder was then used to produce semantic networks from pooled data from high and low achievement groups using proximity matrices derived from the frequencies of paired concepts. The network constructed from maps of higher achievers (coherence measure = 0.18, linked pairs = 294, and number of subjects = 32) showed greater coherence, more concordance in specific paired links, more important specific conceptual relationships, and greater hierarchical organization than did the network constructed from maps of lower achievers (coherence measure = 0.12, linked pairs = 552, and number of subjects = 22). These differences are interpreted in terms of qualitative variation in knowledge organization by two groups of individuals with different levels of relative expertise (as reflected in achievement scores) concerning the topic of chemical equilibrium. The results suggest that the technique of transforming paired links in concept maps into proximity matrices for input to multivariate analyses provides a suitable methodology for comparing and documenting changes in the organization and structure of conceptual knowledge within and between individual students.
Algorithms and Application of Sparse Matrix Assembly and Equation Solvers for Aeroacoustics
NASA Technical Reports Server (NTRS)
Watson, W. R.; Nguyen, D. T.; Reddy, C. J.; Vatsa, V. N.; Tang, W. H.
2001-01-01
An algorithm for symmetric sparse equation solutions on an unstructured grid is described. Efficient, sequential sparse algorithms for degree-of-freedom reordering, supernodes, symbolic/numerical factorization, and forward backward solution phases are reviewed. Three sparse algorithms for the generation and assembly of symmetric systems of matrix equations are presented. The accuracy and numerical performance of the sequential version of the sparse algorithms are evaluated over the frequency range of interest in a three-dimensional aeroacoustics application. Results show that the solver solutions are accurate using a discretization of 12 points per wavelength. Results also show that the first assembly algorithm is impractical for high-frequency noise calculations. The second and third assembly algorithms have nearly equal performance at low values of source frequencies, but at higher values of source frequencies the third algorithm saves CPU time and RAM. The CPU time and the RAM required by the second and third assembly algorithms are two orders of magnitude smaller than that required by the sparse equation solver. A sequential version of these sparse algorithms can, therefore, be conveniently incorporated into a substructuring formulation for domain decomposition to achieve parallel computation, where different substructures are handled by different parallel processors.
Algorithms, games, and evolution
Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh
2014-01-01
Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
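A minimal sketch of the multiplicative weights update rule itself, with illustrative payoffs (the paper's contribution is the correspondence between this rule and population genetics under weak selection, not the routine below):

```python
def mwua(payoff_rows, eta=0.1, rounds=100):
    """Multiplicative weights update: each strategy's weight is
    multiplied by (1 + eta * payoff) every round, then normalized.
    Weight concentrates on strategies with high cumulative payoff
    while the normalization keeps a distribution (entropy trade-off)."""
    n = len(payoff_rows[0])
    w = [1.0] * n
    for t in range(rounds):
        gains = payoff_rows[t % len(payoff_rows)]
        w = [wi * (1.0 + eta * g) for wi, g in zip(w, gains)]
        total = sum(w)
        w = [wi / total for wi in w]
    return w

# Strategy 2 pays best every round, so MWUA concentrates weight on it
w = mwua([[0.1, 0.2, 0.8]])
print(w.index(max(w)))  # → 2
```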
Image enhancement based on edge boosting algorithm
NASA Astrophysics Data System (ADS)
Ngernplubpla, Jaturon; Chitsobhuk, Orachat
2015-12-01
In this paper, a technique for image enhancement based on a proposed edge boosting algorithm to reconstruct a high-quality image from a single low-resolution image is described. The difficulty in single-image super-resolution is that the generic image priors residing in the low-resolution input image may not be sufficient to generate effective solutions. In order to achieve success in super-resolution reconstruction, efficient prior knowledge should be estimated. The statistics of gradient priors in terms of a priority map based on separable gradient estimation, maximum likelihood edge estimation, and local variance are introduced. The proposed edge boosting algorithm takes advantage of these gradient statistics to select the appropriate enhancement weights. The larger weights are applied to the higher-frequency details while the low-frequency details are smoothed. The experimental results illustrate significant quantitative and perceptual performance improvement. It can be seen that the proposed edge boosting algorithm demonstrates high-quality results with fewer artifacts, sharper edges, superior texture areas, and finer detail with low noise.
A contourlet transform based algorithm for real-time video encoding
NASA Astrophysics Data System (ADS)
Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris
2012-06-01
In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by low-quality sensors usually encountered in web-cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video-conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye friendly images compared to other methods.
Comparing Science Achievement Constructs: Targeted and Achieved
ERIC Educational Resources Information Center
Ferrara, Steve; Duncan, Teresa
2011-01-01
This article illustrates how test specifications based solely on academic content standards, without attention to other cognitive skills and item response demands, can fall short of their targeted constructs. First, the authors inductively describe the science achievement construct represented by a statewide sixth-grade science proficiency test.…
Achieving energy efficiency during collective communications
Sundriyal, Vaibhav; Sosonkina, Masha; Zhang, Zhao
2012-09-13
Energy consumption has become a major design constraint in modern computing systems. With the advent of petaflops architectures, power-efficient software stacks have become imperative for scalability. Techniques such as dynamic voltage and frequency scaling (called DVFS) and CPU clock modulation (called throttling) are often used to reduce the power consumption of the compute nodes. To avoid significant performance losses, these techniques should be used judiciously during parallel application execution. For example, an application's communication phases may be good candidates for applying DVFS and CPU throttling without incurring a considerable performance loss. These operations are often treated as indivisible, and little attention has been devoted to the energy-saving potential of their algorithmic steps. In this work, two important collective communication operations, all-to-all and allgather, are investigated as to their augmentation with energy-saving strategies on a per-call basis. The experiments prove the viability of such a fine-grain approach. They also validate a theoretical power-consumption estimate for multicore nodes proposed here. While keeping the performance loss low, the obtained energy savings were always significantly higher than those achieved when DVFS or throttling were switched on across the entire application run.
Varieties of Achievement Motivation.
ERIC Educational Resources Information Center
Kukla, Andre; Scher, Hal
1986-01-01
A recent article by Nicholls on achievement motivation is criticized on three points: (1) definitions of achievement motives are ambiguous; (2) behavioral consequences predicted do not follow from explicit theoretical assumptions; and (3) Nicholls's account of the relation between his theory and other achievement theories is factually incorrect.…
Motivation and School Achievement.
ERIC Educational Resources Information Center
Maehr, Martin L.; Archer, Jennifer
Addressing the question, "What can be done to promote school achievement?", this paper summarizes the literature on motivation relating to classroom achievement and school effectiveness. Particular attention is given to how values, ideology, and various cultural patterns impinge on classroom performance and serve to enhance motivation to achieve.…
Mobility and Reading Achievement.
ERIC Educational Resources Information Center
Waters, Theresa Z.
A study examined the effect of geographic mobility on elementary school students' achievement. Although such mobility, which requires students to make multiple moves among schools, can have a negative impact on academic achievement, the hypothesis for the study was that it was not a determining factor in reading achievement test scores. Subjects…
ERIC Educational Resources Information Center
Kirby, John R.
Two studies examined the effectiveness of the PASS (Planning, Attention, Simultaneous, and Successive cognitive processes) theory of intelligence in predicting reading achievement scores of normally achieving children and distinguishing children with reading disabilities from normally achieving children. The first study dealt with predicting…
Strategic Planning for Higher Education.
ERIC Educational Resources Information Center
Kotler, Philip; Murphy, Patrick E.
1981-01-01
The framework necessary for achieving a strategic planning posture in higher education is outlined. The most important benefit of strategic planning for higher education decision makers is that it forces them to undertake a more market-oriented and systematic approach to long- range planning. (Author/MLW)
Acceleration of iterative image restoration algorithms.
Biggs, D S; Andrews, M
1997-03-10
A new technique for the acceleration of iterative image restoration algorithms is proposed. The method is based on the principles of vector extrapolation and does not require the minimization of a cost function. The algorithm is derived and its performance illustrated with Richardson-Lucy (R-L) and maximum entropy (ME) deconvolution algorithms and the Gerchberg-Saxton magnitude and phase retrieval algorithms. Considerable reduction in restoration times is achieved with little image distortion or computational overhead per iteration. The speedup achieved is shown to increase with the number of iterations performed and is easily adapted to suit different algorithms. An example R-L restoration achieves an average speedup of 40 times after 250 iterations and an ME method 20 times after only 50 iterations. An expression for estimating the acceleration factor is derived and confirmed experimentally. Comparisons with other acceleration techniques in the literature reveal significant improvements in speed and stability. PMID:18250863
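As a generic illustration of extrapolation-based acceleration, here is Aitken's Δ² applied to a scalar fixed-point iteration; this is not the authors' exact vector-extrapolation scheme for R-L or ME deconvolution, just the underlying idea of predicting ahead from successive iterates.

```python
def aitken_accelerate(step, x0, cycles=5):
    """Aitken delta-squared extrapolation of a fixed-point iteration:
    use two ordinary updates to extrapolate toward the fixed point,
    drastically reducing the iteration count for slow contractions."""
    x = x0
    for _ in range(cycles):
        x1, x2 = step(x), step(step(x))
        d1, d2 = x1 - x, x2 - 2.0 * x1 + x
        if d2 == 0.0:
            return x2  # already converged
        x = x - d1 * d1 / d2  # extrapolated estimate
    return x

step = lambda x: 0.95 * x + 0.1  # slow contraction with fixed point 2.0
x_fast = aitken_accelerate(step, 0.0)
print(round(x_fast, 6))  # → 2.0
```

Plain iteration of this map would need hundreds of `step` calls to reach the same accuracy; the extrapolated sequence lands on the fixed point almost immediately, mirroring the speedups reported in the abstract.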
NASA Astrophysics Data System (ADS)
Wang, Bingjie; Pi, Shaohua; Sun, Qi; Jia, Bo
2015-05-01
An improved classification algorithm that considers multiscale wavelet packet Shannon entropy is proposed. Decomposition coefficients at all levels are obtained to build the initial Shannon entropy feature vector. After subtracting the Shannon entropy map of the background signal, the components with the strongest discriminating power in the initial feature vector are picked out to rebuild the Shannon entropy feature vector, which is fed to a radial basis function (RBF) neural network for classification. Four types of man-made vibrational intrusion signals are recorded with a modified Sagnac interferometer. The performance of the improved classification algorithm has been evaluated in classification experiments via the RBF neural network under different diffusion coefficients. An 85% classification accuracy rate is achieved, which is higher than that of the other common algorithms. The classification results show that this improved classification algorithm can be used to classify vibrational intrusion signals in an automatic real-time monitoring system.
Efficient implementation of Jacobi algorithms and Jacobi sets on distributed memory architectures
Eberlein, P.J.; Park, H.
1990-04-01
One-sided methods for implementing Jacobi diagonalization algorithms have been recently proposed for both distributed memory and vector machines. These methods are naturally well suited to distributed memory and vector architectures because of their inherent parallelism and their abundance of vector operations. Also, one-sided methods require substantially less message passing than the two-sided methods, and thus can achieve higher efficiency. The authors describe in detail the use of the one-sided Jacobi rotation as opposed to the rotation used in the Hestenes algorithm; they perceive the difference to have been widely misunderstood. Furthermore, the one-sided algorithm generalizes to other problems such as the nonsymmetric eigenvalue problem while the Hestenes algorithm does not. The authors discuss two new implementations for Jacobi sets for a ring connected array of processors and show their isomorphism to the round-robin ordering.
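The abstract distinguishes the authors' one-sided Jacobi rotation from the Hestenes rotation; as a rough illustration of why one-sided methods parallelize well, here is a generic Hestenes-style sweep (a sketch, not the authors' variant) in which every rotation touches only two columns, so no row-side updates or extra message passing between row blocks are needed.

```python
import math

def one_sided_jacobi_sweep(cols, tol=1e-12):
    # one sweep of Hestenes-style rotations over all column pairs;
    # each rotation orthogonalizes columns p and q in place
    n = len(cols)
    converged = True
    for p in range(n - 1):
        for q in range(p + 1, n):
            ap, aq = cols[p], cols[q]
            alpha = sum(x * x for x in ap)
            beta = sum(y * y for y in aq)
            gamma = sum(x * y for x, y in zip(ap, aq))
            if abs(gamma) <= tol * math.sqrt(alpha * beta):
                continue  # pair already (numerically) orthogonal
            converged = False
            zeta = (beta - alpha) / (2.0 * gamma)
            t = math.copysign(1.0, zeta) / (abs(zeta) + math.sqrt(1.0 + zeta * zeta))
            c = 1.0 / math.sqrt(1.0 + t * t)
            s = c * t
            cols[p] = [c * x - s * y for x, y in zip(ap, aq)]
            cols[q] = [s * x + c * y for x, y in zip(ap, aq)]
    return converged

def singular_values(cols, max_sweeps=30):
    # at convergence the columns are mutually orthogonal and their
    # norms are the singular values of the original matrix
    cols = [list(c) for c in cols]
    for _ in range(max_sweeps):
        if one_sided_jacobi_sweep(cols):
            break
    return sorted((math.sqrt(sum(x * x for x in c)) for c in cols),
                  reverse=True)
```

Because a rotation reads and writes only two columns, column pairs can be distributed across processors and rotated concurrently under a suitable ordering (e.g. the round-robin ordering the authors discuss).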
Excursion-Set-Mediated Genetic Algorithm
NASA Technical Reports Server (NTRS)
Noever, David; Baskaran, Subbiah
1995-01-01
Excursion-set-mediated genetic algorithm (ESMGA) is embodiment of method of searching for and optimizing computerized mathematical models. Incorporates powerful search and optimization techniques based on concepts analogous to natural selection and laws of genetics. In comparison with other genetic algorithms, this one achieves stronger condition for implicit parallelism. Includes three stages of operations in each cycle, analogous to biological generation.
Parallel Algorithm Solves Coupled Differential Equations
NASA Technical Reports Server (NTRS)
Hayashi, A.
1987-01-01
Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.
Reactive Collision Avoidance Algorithm
NASA Technical Reports Server (NTRS)
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
A Synthesized Heuristic Task Scheduling Algorithm
Dai, Yanyan; Zhang, Xiangli
2014-01-01
Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, the algorithm chooses tasks at three levels of priority: critical tasks have the highest priority, then tasks with a longer path to the exit task are selected, and finally tasks with fewer predecessors are scheduled. In the resource selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment on all children of the current task permits better resource-selection decisions. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms on randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm achieves better scheduling performance. PMID:25254244
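The path-length and predecessor-count criteria of the prioritizing phase can be sketched as below; the DAG encoding, cost values, and function names are illustrative assumptions, and the critical-task level is omitted for brevity.

```python
def upward_rank(tasks, succ, cost):
    # longest-path length (sum of task costs) from each task to the exit task
    rank = {}
    def rec(t):
        if t not in rank:
            rank[t] = cost[t] + max((rec(s) for s in succ.get(t, [])),
                                    default=0)
        return rank[t]
    for t in tasks:
        rec(t)
    return rank

def priority_order(tasks, succ, preds, cost):
    rank = upward_rank(tasks, succ, cost)
    # longer path to the exit task first; fewer predecessors breaks ties
    return sorted(tasks, key=lambda t: (-rank[t], len(preds.get(t, []))))
```

On a diamond-shaped graph this puts the entry task first and the exit task last, with the longer branch scheduled before the shorter one.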
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
GPUs benchmarking in subpixel image registration algorithm
NASA Astrophysics Data System (ADS)
Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier
2015-05-01
Image registration techniques are used across different scientific fields, such as medical imaging and optical metrology. The most direct way to calculate the shift between two images is cross correlation, taking the location of the highest value in the correlation image. The shift resolution is then given in whole pixels, which may not be sufficient for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique, but the memory required by the system is significantly higher. To avoid this memory consumption we implement a subpixel shifting method based on the FFT. Starting from the original images, subpixel shifts can be applied by multiplying their discrete Fourier transforms by linear phases with different slopes. This method is time consuming because testing each candidate shift requires new calculations, but the algorithm is highly parallelizable and therefore well suited to high-performance computing systems. GPU (Graphics Processing Unit) accelerated computing became popular more than ten years ago because GPUs offer hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images, making a first approach by FFT-based correlation and then refining to subpixel precision using the technique described above; we consider this a 'brute force' method. We present a benchmark of the algorithm consisting of a first approach (pixel resolution) followed by subpixel refinement, decreasing the shift step in every loop to achieve high resolution in few steps. The program was executed on three different computers. Finally, we present the results of the computation with different CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of using GPUs.
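The linear-phase subpixel shift can be sketched in 1-D with a naive DFT; a real implementation would use an FFT (on the GPU), and the signal and function names here are illustrative.

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def subpixel_shift(x, delta):
    # shift a (band-limited) signal by a fractional amount delta by
    # multiplying its spectrum with a linear phase; bins above n/2
    # must be treated as negative frequencies
    n = len(x)
    X = dft(x)
    Y = []
    for k, Xk in enumerate(X):
        f = k if k <= n // 2 else k - n  # signed frequency index
        Y.append(Xk * cmath.exp(-2j * math.pi * f * delta / n))
    return [y.real for y in idft(Y)]
```

For a band-limited input such as a sampled cosine, the result matches the analytically shifted signal to machine precision, which is what makes the phase-slope search usable for registration.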
Higher order stationary subspace analysis
NASA Astrophysics Data System (ADS)
Panknin, Danny; von Bünau, Paul; Kawanabe, Motoaki; Meinecke, Frank C.; Müller, Klaus-Robert
2016-03-01
Non-stationarity in data is a ubiquitous problem in signal processing. The recent stationary subspace analysis (SSA) procedure decomposes such data into a stationary subspace and a non-stationary part. Algorithmically, only weak non-stationarities can be tackled by SSA. The present paper takes the conceptual step of generalizing from the first and second moments used in SSA to higher-order moments, thus defining the proposed higher-order stationary subspace analysis procedure (HOSSA). The paper derives the novel procedure and presents simulations. An obvious trade-off is observed between the necessity of estimating higher moments and the accuracy and robustness with which they can be estimated. In an ideal setting with plenty of data, where higher-moment information dominates, the novel approach can outperform standard SSA. However, with limited data, even when higher moments actually dominate the underlying data, SSA may still perform on par.
Efficient implementations of hyperspectral chemical-detection algorithms
NASA Astrophysics Data System (ADS)
Brett, Cory J. C.; DiPietro, Robert S.; Manolakis, Dimitris G.; Ingle, Vinay K.
2013-10-01
Many military and civilian applications depend on the ability to remotely sense chemical clouds using hyperspectral imagers, from detecting small but lethal concentrations of chemical warfare agents to mapping plumes in the aftermath of natural disasters. Real-time operation is critical in these applications but becomes difficult to achieve as the number of chemicals we search for increases. In this paper, we present efficient CPU and GPU implementations of matched-filter-based algorithms so that real-time operation can be maintained with higher chemical-signature counts. The optimized C++ implementations show between 3x and 9x speedup over vectorized MATLAB implementations.
Heritability of Creative Achievement
ERIC Educational Resources Information Center
Piffer, Davide; Hur, Yoon-Mi
2014-01-01
Although creative achievement is a subject of much attention to lay people, the origin of individual differences in creative accomplishments remains poorly understood. This study examined genetic and environmental influences on creative achievement in an adult sample of 338 twins (mean age = 26.3 years; SD = 6.6 years). Twins completed the Creative…
Confronting the Achievement Gap
ERIC Educational Resources Information Center
Gardner, David
2007-01-01
This article talks about the large achievement gap between children of color and their white peers. The reasons for the achievement gap are varied. First, many urban minorities come from a background of poverty. One of the detrimental effects of growing up in poverty is receiving inadequate nourishment at a time when bodies and brains are rapidly…
States Address Achievement Gaps.
ERIC Educational Resources Information Center
Christie, Kathy
2002-01-01
Summarizes 2 state initiatives to address the achievement gap: North Carolina's report by the Advisory Commission on Raising Achievement and Closing Gaps, containing an 11-point strategy, and Kentucky's legislation putting in place 10 specific processes. The North Carolina report is available at www.dpi.state.nc.us.closingthegap; Kentucky's…
Wechsler Individual Achievement Test.
ERIC Educational Resources Information Center
Taylor, Ronald L.
1999-01-01
This article describes the Wechsler Individual Achievement Test, a comprehensive measure of achievement for individuals in grades K-12. Eight subtests assess mathematics reasoning, spelling, reading comprehension, numerical operations, listening comprehension, oral expression, and written expression. Its administration, standardization,…
Inverting the Achievement Pyramid
ERIC Educational Resources Information Center
White-Hood, Marian; Shindel, Melissa
2006-01-01
Attempting to invert the pyramid to improve student achievement and increase all students' chances for success is not a new endeavor. For decades, educators have strategized, formed think tanks, and developed school improvement teams to find better ways to improve the achievement of all students. Currently, the No Child Left Behind Act (NCLB) is…
ERIC Educational Resources Information Center
Ohio State Dept. of Education, Columbus. Trade and Industrial Education Service.
The Ohio Trade and Industrial Education Achievement Test battery is comprised of seven basic achievement tests: Machine Trades, Automotive Mechanics, Basic Electricity, Basic Electronics, Mechanical Drafting, Printing, and Sheet Metal. The tests were developed by subject matter committees and specialists in testing and research. The Ohio Trade and…
General Achievement Trends: Maryland
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Arkansas
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Idaho
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Nebraska
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Colorado
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Iowa
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Hawaii
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Kentucky
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Florida
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Texas
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Oregon
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Virginia
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
ERIC Educational Resources Information Center
Education Digest: Essential Readings Condensed for Quick Review, 2004
2004-01-01
Is the concept of "honor roll" obsolete? The honor roll has always been a way for schools to recognize the academic achievement of their students. But does it motivate students? In this article, several elementary school principals share their views about honoring student achievement. Among others, Virginia principal Nancy Moga said that students…
ERIC Educational Resources Information Center
Martinez, Paul
The Raising Quality and Achievement Program is a 3-year initiative to support further education (FE) colleges in the United Kingdom in their drive to improve students' achievement and the quality of provision. The program offers the following: (1) quality information and advice; (2) onsite support for individual colleges; (3) help with…
Achieving Perspective Transformation.
ERIC Educational Resources Information Center
Nowak, Jens
Perspective transformation is a consciously achieved state in which the individual's perspective on life is transformed. The new perspective serves as a vantage point for life's actions and interactions, affecting the way life is lived. Three conditions are basic to achieving perspective transformation: (1) "feeling" experience, i.e., getting in…
ERIC Educational Resources Information Center
Abowitz, Kathleen Knight
2011-01-01
Public schools are functionally provided through structural arrangements such as government funding, but public schools are achieved in substance, in part, through local governance. In this essay, Kathleen Knight Abowitz explains the bifocal nature of achieving public schools; that is, that schools are both subject to the unitary Public compact of…
General Achievement Trends: Tennessee
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
ERIC Educational Resources Information Center
Fletcher, Mike; And Others
1992-01-01
This collection of seven articles examines achievement-based resourcing (ABR), the concept that the funding of educational institutions should be linked to their success in promoting student achievement, with a focus on the application of ABR to postsecondary education in the United Kingdom. The articles include: (1) "Introduction" (Mick…
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Vectorization of algorithms for solving systems of difference equations
Buzbee, B.L.
1981-01-01
Today's fastest computers achieve their highest level of performance when processing vectors. Consequently, considerable effort has been spent in the past decade developing algorithms that can be expressed as operations on vectors. In this paper two types of vector architecture are defined. A discussion is presented on the variation of performance that can occur on a vector processor as a function of algorithm and implementation, the consequences of this variation, and the performance of some basic operators on the two classes of vector architecture. Also discussed is the performance of higher-level operators, including some that should be used with caution. With both types of operators, the implementation of techniques for solving systems of difference equations is discussed. Included are fast Poisson solvers and point, line, and conjugate-gradient techniques. 1 figure.
Multithreaded Algorithms for Graph Coloring
Catalyurek, Umit V.; Feo, John T.; Gebremedhin, Assefaw H.; Halappanavar, Mahantesh; Pothen, Alex
2012-10-21
Graph algorithms are challenging to parallelize when high performance and scalability are primary goals. Low concurrency, poor data locality, irregular access patterns, and a high ratio of data access to computation are among the chief reasons for the challenge. The performance implication of these features is exacerbated on distributed memory machines. More success is being achieved on shared-memory, multi-core architectures supporting multithreading. We consider a prototypical graph problem, coloring, and show how a greedy algorithm for solving it can be effectively parallelized on multithreaded architectures. We present in particular two different parallel algorithms. The first relies on speculation and iteration, and is suitable for any shared-memory, multithreaded system. The second uses dataflow principles and is targeted at the massively multithreaded Cray XMT system. We benchmark the algorithms on three different platforms and demonstrate scalable runtime performance. In terms of quality of solution, both algorithms use nearly the same number of colors as the serial algorithm.
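The speculate-and-iterate scheme of the first algorithm can be sketched serially as below; the names and the conflict-resolution rule are simplifications, and the parallel gains come from running the greedy pass over many vertices concurrently, which is what makes color conflicts possible in the first place.

```python
def greedy_color(order, adj, colors):
    # assign each vertex the smallest color not used by its colored neighbors
    for v in order:
        used = {colors[u] for u in adj[v] if colors[u] is not None}
        c = 0
        while c in used:
            c += 1
        colors[v] = c

def speculative_coloring(adj):
    # speculate: color all pending vertices (concurrently in the real
    # algorithm), then detect conflict edges and re-color one endpoint
    # of each; iterate until no conflicts remain. Run serially, the
    # first round is already conflict-free.
    colors = {v: None for v in adj}
    pending = sorted(adj)
    while pending:
        greedy_color(pending, adj, colors)
        pending = [v for v in adj
                   if any(u < v and colors[u] == colors[v] for u in adj[v])]
    return colors
```

Because only the endpoints of conflict edges are re-colored, each iteration shrinks the pending set, and the final coloring uses about as many colors as the serial greedy algorithm.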
A variable step-size NLMS algorithm employing partial update schemes for echo cancellation
NASA Astrophysics Data System (ADS)
Xu, Li; Ju, Yongfeng
2011-02-01
Today, with increasing demand for higher-quality communication, long adaptive filters are frequently encountered in practical applications such as acoustic echo cancellation. The growth of adaptive filter lengths from tens to hundreds or thousands of taps presents conventional adaptive algorithms with new challenges. Therefore, a new variable step-size normalized least-mean-square (NLMS) algorithm combined with partial updating is proposed and its performance is investigated through simulations. The proposed step-size method takes into account the instantaneous value of the output error and provides a trade-off between the convergence rate and the steady-state coefficient error. Because the large number of filter coefficients diminishes the usefulness of the adaptive filtering algorithm owing to increased complexity, the new algorithm employs tap-selection partial update schemes that update only the subset of filter coefficients corresponding to the largest-magnitude elements of the regression vector. Simulation results for acoustic echo cancellation verify that the proposed algorithm achieves a higher rate of convergence and brings significant computational savings compared with the NLMS algorithm.
Cognitive Style, Operativity, and Reading Achievement.
ERIC Educational Resources Information Center
Roberge, James J.; Flexer, Barbara K.
1984-01-01
This developmental study was designed to examine the effects of field dependence-independence and level of operational development on the reading achievement of sixth, seventh, and eighth graders. Field dependence-independence had no significant effect on reading achievement, but high-operational students scored significantly higher than…
Mathematics Coursework Regulates Growth in Mathematics Achievement
ERIC Educational Resources Information Center
Ma, Xin; Wilkins, Jesse L. M.
2007-01-01
Using data from the Longitudinal Study of American Youth (LSAY), we examined the extent to which students' mathematics coursework regulates (influences) the rate of growth in mathematics achievement during middle and high school. Graphical analysis showed that students who started middle school with higher achievement took individual mathematics…
Schooling and Achievement in American Society.
ERIC Educational Resources Information Center
Sewell, William H.; And Others
This book is an outgrowth of an interdisciplinary seminar on achievement processes. The 15 chapters of this book are distributed into three substantive sections. Part One includes a series of chapters dealing in one way or another with achievement in the life cycle. One chapter discusses the causes and consequences of higher education and…
Maryland's Achievements in Public Education, 2011
ERIC Educational Resources Information Center
Maryland State Department of Education, 2011
2011-01-01
This report presents Maryland's achievements in public education for 2011. Maryland's achievements include: (1) Maryland's public schools again ranked #1 in the nation in Education Week's 2011 Quality Counts annual report; (2) Maryland ranked 1st nationwide for a 3rd year in a row in the percentage of public school students scoring 3 or higher on…
Ascent guidance algorithm using lidar wind measurements
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
[Achievement of therapeutic objectives].
Mantilla, Teresa
2014-07-01
Therapeutic objectives for patients with atherogenic dyslipidemia are achieved by improving patient compliance and adherence. Clinical practice guidelines address the importance of treatment compliance for achieving objectives. The combination of a fixed dose of pravastatin and fenofibrate increases adherence by simplifying the drug regimen and reducing the number of daily doses. The good tolerance, the cost of the combination, and the possibility of adjusting administration to the patient's lifestyle help achieve the objectives for these patients with high cardiovascular risk. PMID:25043543
Dynamic hybrid algorithms for MAP inference in discrete MRFs.
Alahari, Karteek; Kohli, Pushmeet; Torr, Philip H S
2010-10-01
In this paper, we present novel techniques that improve the computational and memory efficiency of algorithms for solving multilabel energy functions arising from discrete MRFs or CRFs. These methods are motivated by the observations that the performance of minimization algorithms depends on: 1) the initialization used for the primal and dual variables and 2) the number of primal variables involved in the energy function. Our first method (dynamic alpha-expansion) works by "recycling" results from previous problem instances. The second method simplifies the energy function by "reducing" the number of unknown variables present in the problem. Further, we show that it can also be used to generate a good initialization for the dynamic alpha-expansion algorithm by "reusing" dual variables. We test the performance of our methods on energy functions encountered in the problems of stereo matching and color and object-based segmentation. Experimental results show that our methods achieve a substantial improvement in the performance of alpha-expansion, as well as other popular algorithms such as sequential tree-reweighted message passing and max-product belief propagation. We also demonstrate the applicability of our schemes for certain higher order energy functions, such as the one described in [1], for interactive texture-based image and video segmentation. In most cases, we achieve a 10-15 times speed-up in the computation time. Our modified alpha-expansion algorithm provides similar performance to Fast-PD, but is conceptually much simpler. Both alpha-expansion and Fast-PD can be made orders of magnitude faster when used in conjunction with the "reduce" scheme proposed in this paper. PMID:20724761
Image watermarking using a dynamically weighted fuzzy c-means algorithm
NASA Astrophysics Data System (ADS)
Kang, Myeongsu; Ho, Linh Tran; Kim, Yongmin; Kim, Cheol Hong; Kim, Jong-Myon
2011-10-01
Digital watermarking has received extensive attention as a new method of protecting multimedia content from unauthorized copying. In this paper, we present a nonblind watermarking system using a proposed dynamically weighted fuzzy c-means (DWFCM) technique combined with discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD) techniques for copyright protection. The proposed scheme efficiently selects blocks in which the watermark is embedded using new membership values of DWFCM as the embedding strength. We evaluated the proposed algorithm in terms of robustness against various watermarking attacks and imperceptibility compared to other algorithms [DWT-DCT-based and DCT-fuzzy c-means (FCM)-based algorithms]. Experimental results indicate that the proposed algorithm outperforms other algorithms in terms of robustness against several types of attacks, such as noise addition (Gaussian noise, salt and pepper noise), rotation, Gaussian low-pass filtering, mean filtering, median filtering, Gaussian blur, image sharpening, histogram equalization, and JPEG compression. In addition, the proposed algorithm achieves higher values of peak signal-to-noise ratio (approximately 49 dB) and lower values of measure-singular value decomposition (5.8 to 6.6) than other algorithms.
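The paper's dynamically weighted FCM variant is not detailed here, but the standard fuzzy c-means iteration it builds on can be sketched in a few lines. The function below is a minimal, hypothetical 1-D illustration (function name and parameters are ours, not the authors'): memberships and centroids are updated alternately under fuzzifier m.

```python
def fcm(points, c=2, m=2.0, iters=60):
    """Standard fuzzy c-means on 1-D data: alternately update memberships
    and centroids.  m > 1 is the fuzzifier; m -> 1 approaches hard k-means."""
    lo, hi = min(points), max(points)
    # deterministic initialization: spread centers evenly over the data range
    centers = [lo + (hi - lo) * i / (c - 1) for i in range(c)]
    u = [[0.0] * len(points) for _ in range(c)]
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        for j, x in enumerate(points):
            d = [abs(x - ck) or 1e-12 for ck in centers]
            for i in range(c):
                u[i][j] = 1.0 / sum((d[i] / dk) ** (2.0 / (m - 1.0)) for dk in d)
        # centroid update: mean weighted by u_ij^m
        for i in range(c):
            w = [u[i][j] ** m for j in range(len(points))]
            centers[i] = sum(wj * xj for wj, xj in zip(w, points)) / sum(w)
    return centers, u
```

In the paper's setting, (dynamically weighted) membership values of this kind steer which DWT/DCT/SVD blocks receive the watermark and at what strength.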
Predicting Achievement and Motivation.
ERIC Educational Resources Information Center
Uguroglu, Margaret; Walberg, Herbert J.
1986-01-01
Motivation and nine other factors were measured for 970 students in grades five through eight in a study of factors predicting achievement and predicting motivation. Results are discussed. (Author/MT)
Student Achievement and Motivation
ERIC Educational Resources Information Center
Flammer, Gordon H.; Mecham, Robert C.
1974-01-01
Compares the lecture and self-paced methods of instruction on the basis of student motivation and achievement, comparing motivating and demotivating factors in each, and their potential for motivation and achievement. (Authors/JR)
Stability of Bareiss algorithm
NASA Astrophysics Data System (ADS)
Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.
1991-12-01
In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.
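For context, the Levinson algorithm that the paper compares against can be sketched as follows. This is a minimal, textbook-style recursion (our own illustration, not the authors' code) for a symmetric positive definite Toeplitz system: it grows forward and backward vectors together with the solution, one order at a time, in O(n^2).

```python
def solve_toeplitz(t, y):
    """Levinson recursion for T x = y, where T is the n x n symmetric
    positive definite Toeplitz matrix with first column t.
    O(n^2) time, versus O(n^3) for a general dense solver."""
    f = [1.0 / t[0]]              # forward vector
    b = [1.0 / t[0]]              # backward vector
    x = [y[0] / t[0]]             # solution of the leading subsystem
    for k in range(1, len(t)):
        ef = sum(t[k - i] * f[i] for i in range(k))   # forward prediction error
        eb = sum(t[i + 1] * b[i] for i in range(k))   # backward prediction error
        denom = 1.0 - ef * eb
        fpad, bpad = f + [0.0], [0.0] + b
        f = [(fpad[i] - ef * bpad[i]) / denom for i in range(k + 1)]
        b = [(bpad[i] - eb * fpad[i]) / denom for i in range(k + 1)]
        ex = sum(t[k - i] * x[i] for i in range(k))
        xpad = x + [0.0]
        x = [xpad[i] + (y[k] - ex) * b[i] for i in range(k + 1)]
    return x
```

The Bareiss algorithm reaches the same solution by a different O(n^2) elimination scheme; the paper's point is that its error growth is better behaved.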
NASA Astrophysics Data System (ADS)
Owen, Mark W.; Stubberud, Allen R.
2003-12-01
Highly maneuvering threats are a major concern for the Navy and the DoD, and the technology discussed in this paper is intended to help address this issue. A neural extended Kalman filter algorithm has been embedded in an interacting multiple model architecture for target tracking. The neural extended Kalman filter algorithm is used to improve motion model prediction during maneuvers. With a better target motion model, noise reduction can be achieved through a maneuver. Unlike the interacting multiple model architecture, which uses a high process noise model to hold a target through a maneuver with poor velocity and acceleration estimates, a neural extended Kalman filter is used to predict corrections to the velocity and acceleration states of a target through a maneuver. The neural extended Kalman filter estimates the weights of a neural network, which in turn are used to modify the state estimate predictions of the filter as measurements are processed. The neural network training is performed on-line as data is processed. In this paper, the simulation results of a tracking problem using a neural extended Kalman filter embedded in an interacting multiple model tracking architecture are shown. Preliminary results on the 2nd Benchmark Problem are also given.
Feedback algorithm for simulation of multi-segmented cracks
Chady, T.; Napierala, L.
2011-06-23
In this paper, a method for obtaining a three-dimensional crack model from a radiographic image is discussed. A genetic algorithm aiming at a close simulation of the crack's shape is presented. Results obtained with the genetic algorithm are compared to those achieved in the authors' previous work. The described algorithm has been tested on both simulated and real-life cracks.
An Experimental Method for the Active Learning of Greedy Algorithms
ERIC Educational Resources Information Center
Velazquez-Iturbide, J. Angel
2013-01-01
Greedy algorithms constitute an apparently simple algorithm design technique, but their learning goals are not simple to achieve. We present a didactic method aimed at promoting active learning of greedy algorithms. The method is focused on the concept of selection function, and is based on explicit learning goals. It mainly consists of an…
Cognitive Style, Operativity, and Mathematics Achievement.
ERIC Educational Resources Information Center
Roberge, James J.; Flexer, Barbara K.
1983-01-01
This study examined the effects of field dependence/independence and the level of operational development on the mathematics achievement of 450 students in grades 6-8. Field-independent students scored significantly higher on total mathematics, concepts, and problem-solving tests. High-operational students scored significantly higher on all tests.…
CORDIC Algorithms: Theory And Extensions
NASA Astrophysics Data System (ADS)
Delosme, Jean-Marc
1989-11-01
Optimum algorithms for signal processing are notoriously costly to implement since they usually require intensive linear algebra operations to be performed at very high rates. In these cases a cost-effective solution is to design a pipelined or parallel architecture with special-purpose VLSI processors. One may often lower the hardware cost of such a dedicated architecture by using processors that implement CORDIC-like arithmetic algorithms. Indeed, with CORDIC algorithms, the evaluation and the application of an operation, such as determining a rotation that brings a vector onto another one and rotating other vectors by that amount, require the same time on identical processors and can be fully overlapped in most cases, thus leading to highly efficient implementations. We have shown earlier that a necessary condition for a CORDIC-type algorithm to exist is that the function to be implemented can be represented in terms of a matrix exponential. This paper refines this condition to the ability to represent the desired function in terms of a rational representation of a matrix exponential. This insight gives us a powerful tool for the design of new CORDIC algorithms. This is demonstrated by rederiving classical CORDIC algorithms and introducing several new ones, for Jacobi rotations, three and higher dimensional rotations, etc.
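The classical circular-rotation CORDIC that the paper rederives can be sketched as follows. This is a standard textbook formulation (our illustration, not the paper's notation): the residual angle is driven to zero using only shifts, adds, and a precomputed arctangent table, with the constant gain folded in up front.

```python
import math

def cordic_rotate(theta, n=40):
    """Rotation-mode circular CORDIC: returns (cos(theta), sin(theta)).
    Valid for |theta| below the table's convergence bound (~1.74 rad)."""
    # Precompute the gain K = prod 1/sqrt(1 + 2^-2i) and the angle table.
    K = 1.0
    angles = []
    for i in range(n):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
        angles.append(math.atan(2.0 ** -i))
    x, y, z = K, 0.0, theta        # start from the pre-scaled unit vector
    for i in range(n):
        d = 1.0 if z >= 0.0 else -1.0          # steer toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y
```

In hardware the multiplications by 2^-i become bit shifts, which is the source of the cost advantage the abstract describes.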
Testosterone and Occupational Achievement.
ERIC Educational Resources Information Center
Dabbs, James M., Jr.
1992-01-01
Archival data on 4,462 military veterans linked higher levels of serum testosterone to lower-status occupations. A structural equation model was supported in which higher testosterone, mediated through lower intellectual ability, greater antisocial behavior, and lower education, leads away from white-collar occupations. Contains 49 references.…
A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem
Liu, Dong-sheng; Fan, Shu-jiang
2014-01-01
To offer mobile customers better service, mobile users must first be classified. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also use context information as classification attributes for the mobile user, dividing context into public and private classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data: the algorithm classifies mobile users into Basic service, E-service, Plus service, and Total service classes and also yields rules about mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper is more accurate and simpler. PMID:24688389
2011 Higher Education Sustainability Review
ERIC Educational Resources Information Center
Wagner, Margo, Ed.
2012-01-01
Looking through the lens of AASHE Bulletin stories in 2011, this year's review reveals an increased focus on higher education access, affordability, and success; more green building efforts than ever before; and growing campus-community engagement on food security, among many other achievements. Contributors include James Applegate (Lumina…
An Innovative Thinking-Based Intelligent Information Fusion Algorithm
Hu, Liang; Liu, Gang; Zhou, Jin
2013-01-01
This study proposes an intelligent algorithm that can realize information fusion with reference to relative research achievements in brain cognitive theory and innovative computation. This algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of this algorithm, including information sense and perception, memory storage, divergent thinking, convergent thinking, and evaluation system, are simulated and modeled. This algorithm fully develops the innovative thinking skills of knowledge in information fusion and attempts to translate the abstract concepts of brain cognitive science into specific, operable research routes and strategies. Furthermore, the influences of each parameter of this algorithm on algorithm performance are analyzed and compared with those of classical intelligent algorithms through tests. Test results suggest that the algorithm proposed in this study can obtain the optimum problem solution with fewer target evaluation times, improve optimization effectiveness, and achieve the effective fusion of information. PMID:23956699
Fast three-step phase-shifting algorithm
Huang, Peisen S.; Zhang, Song
2006-07-20
We propose a new three-step phase-shifting algorithm, which is much faster than the traditional three-step algorithm. We achieve the speed advantage by using a simple intensity ratio function to replace the arc tangent function in the traditional algorithm. The phase error caused by this new algorithm is compensated for by use of a lookup table. Our experimental results show that both the new algorithm and the traditional algorithm generate similar results, but the new algorithm is 3.4 times faster. By implementing this new algorithm in a high-resolution, real-time three-dimensional shape measurement system, we were able to achieve a measurement speed of 40 frames per second at a resolution of 532x500 pixels, all with an ordinary personal computer.
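The traditional three-step algorithm that serves as the paper's baseline reduces, for phase shifts of -120, 0, and +120 degrees, to a single arctangent per pixel. A minimal sketch with a synthetic round trip (our illustration; the paper's fast intensity-ratio/lookup-table variant is not reproduced here):

```python
import math

def three_step_phase(I1, I2, I3):
    """Traditional three-step algorithm: recover the wrapped phase from
    three fringe intensities shifted by -120, 0, and +120 degrees."""
    return math.atan2(math.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

# Round trip: synthesize intensities for a known phase and recover it.
phi = 0.7                      # ground-truth phase (radians)
Ia, Ib = 0.5, 0.4              # background and modulation amplitudes
shifts = (-2.0 * math.pi / 3.0, 0.0, 2.0 * math.pi / 3.0)
I1, I2, I3 = (Ia + Ib * math.cos(phi + s) for s in shifts)
```

The arctangent call above is exactly what the paper replaces with an intensity ratio plus lookup table to gain its reported 3.4x speedup.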
POSE Algorithms for Automated Docking
NASA Technical Reports Server (NTRS)
Heaton, Andrew F.; Howard, Richard T.
2011-01-01
POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.
Benchmarking image fusion algorithm performance
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2012-06-01
Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regards to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information theory based image metrics is presented and compared to perception test results. The results show an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the performance benchmark established.
Library of Continuation Algorithms
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
Explorations in achievement motivation
NASA Technical Reports Server (NTRS)
Helmreich, Robert L.
1982-01-01
Recent research on the nature of achievement motivation is reviewed. A three-factor model of intrinsic motives is presented and related to various criteria of performance, job satisfaction and leisure activities. The relationships between intrinsic and extrinsic motives are discussed. Needed areas for future research are described.
Achieving health care affordability.
Payson, Norman C
2002-10-01
Not all plans are jumping headlong into the consumer-centric arena. In this article, the CEO of Oxford Health Plans discusses how advanced managed care can achieve what other consumer-centric programs seek to do--provide affordable, quality health care. PMID:12391815
Issues in Achievement Testing.
ERIC Educational Resources Information Center
Baker, Eva L.
This booklet is intended to help school personnel, parents, students, and members of the community understand concepts and research relating to achievement testing in public schools. The paper's sections include: (1) test use with direct effects on students (test of certification, selection, and placement); (2) test use with indirect effects on…
Achieving Peace through Education.
ERIC Educational Resources Information Center
Clarken, Rodney H.
While it is generally agreed that peace is desirable, there are barriers to achieving a peaceful world. These barriers are classified into three major areas: (1) an erroneous view of human nature; (2) injustice; and (3) fear of world unity. In a discussion of these barriers, it is noted that although the consciousness and conscience of the world…
Intelligence and Educational Achievement
ERIC Educational Resources Information Center
Deary, Ian J.; Strand, Steve; Smith, Pauline; Fernandes, Cres
2007-01-01
This 5-year prospective longitudinal study of 70,000+ English children examined the association between psychometric intelligence at age 11 years and educational achievement in national examinations in 25 academic subjects at age 16. The correlation between a latent intelligence trait (Spearman's "g"from CAT2E) and a latent trait of educational…
SALT and Spelling Achievement.
ERIC Educational Resources Information Center
Nelson, Joan
A study investigated the effects of suggestopedic accelerative learning and teaching (SALT) on the spelling achievement, attitudes toward school, and memory skills of fourth-grade students. Subjects were 20 male and 28 female students from two self-contained classrooms at Kennedy Elementary School in Rexburg, Idaho. The control classroom and the…
ERIC Educational Resources Information Center
Bracey, Gerald W.
2008-01-01
In his "Wall Street Journal" op-ed on the 25th anniversary of "A Nation At Risk", former assistant secretary of education Chester E. Finn Jr. applauded the report for turning U.S. education away from equality and toward achievement. It was not surprising, then, that in mid-2008, Finn arranged a conference to examine the potential "Robin Hood…
Intelligence, Personality and Achievement.
ERIC Educational Resources Information Center
Muir, R. C.; And Others
A longitudinal developmental study of a group of middle-class children is described, with emphasis on a segment of the research investigating the relationship of achievement, intelligence, and emotional disturbance. The subjects were 105 children aged five to 6.3 attending two schools in Montreal. Each child was assessed in the areas of…
School Students' Science Achievement
ERIC Educational Resources Information Center
Shymansky, James; Wang, Tzu-Ling; Annetta, Leonard; Everett, Susan; Yore, Larry D.
2013-01-01
This paper is a report of the impact of an externally funded, multiyear systemic reform project on students' science achievement on a modified version of the Third International Mathematics and Science Study (TIMSS) test in 33 small, rural school districts in two Midwest states. The systemic reform effort utilized a cascading leadership strategy…
Essays on Educational Achievement
ERIC Educational Resources Information Center
Ampaabeng, Samuel Kofi
2013-01-01
This dissertation examines the determinants of student outcomes--achievement, attainment, occupational choices and earnings--in three different contexts. The first two chapters focus on Ghana while the final chapter focuses on the US state of Massachusetts. In the first chapter, I exploit the incidence of famine and malnutrition that resulted to…
Increasing Male Academic Achievement
ERIC Educational Resources Information Center
Jackson, Barbara Talbert
2008-01-01
The No Child Left Behind legislation has brought greater attention to the academic performance of American youth. Its emphasis on student achievement requires a closer analysis of assessment data by school districts. To address the findings, educators must seek strategies to remedy failing results. In a mid-Atlantic district of the United States,…
Setting and Achieving Objectives.
ERIC Educational Resources Information Center
Knoop, Robert
1986-01-01
Provides basic guidelines which school officials and school boards may find helpful in negotiating, establishing, and managing objectives. Discusses characteristics of good objectives, specific and directional objectives, multiple objectives, participation in setting objectives, feedback on goal process and achievement, and managing a school…
Schools Achieving Gender Equity.
ERIC Educational Resources Information Center
Revis, Emma
This guide is designed to assist teachers presenting the Schools Achieving Gender Equity (SAGE) curriculum for vocational education students, which was developed to align gender equity concepts with the Kentucky Education Reform Act (KERA). Included in the guide are lesson plans for classes on the following topics: legal issues of gender equity,…
ERIC Educational Resources Information Center
Ohrn, Deborah Gore, Ed.
1993-01-01
This issue of the Goldfinch highlights some of Iowa's 20th century women of achievement. These women have devoted their lives to working for human rights, education, equality, and individual rights. They come from the worlds of politics, art, music, education, sports, business, entertainment, and social work. They represent Native Americans,…
ERIC Educational Resources Information Center
Goodwin, MacArthur
2000-01-01
Focuses on policy issues that have affected arts education in the twentieth century, such as: interest in discipline-based arts education, influence of national arts associations, and national standards and coordinated assessment. States that whether the policy decisions are viewed as achievements or disasters is for future determination. (CMK)
ERIC Educational Resources Information Center
Prince George's Community Coll., Largo, MD. Office of Institutional Research and Analysis.
This report summarizes the achievements of Prince George's Community College (PGCC) with regard to minority outcomes. Table 1 summarizes the undergraduate enrollment trends for African Americans as well as total minorities from fall 1994 through fall 1998. Both the headcount number of African American students and the proportion of African…
Appraising Reading Achievement.
ERIC Educational Resources Information Center
Ediger, Marlow
To determine quality sequence in pupil progress, evaluation approaches need to be used which guide the teacher to assist learners to attain optimally. Teachers must use a variety of procedures to appraise student achievement in reading, because no one approach is adequate. Appraisal approaches might include: (1) observation and subsequent…
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987
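The PSO component can be illustrated in isolation. Below is a minimal, generic particle swarm optimizer (our own sketch with conventional parameter values, not the authors' MapReduce implementation); in the paper's setting, the position vector would encode the BP network's initial weights and thresholds, and f would be the network's training error.

```python
import random

def pso(f, dim, n=30, iters=200, seed=1):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best position; the swarm tracks the global best.  Minimizes f."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5        # inertia and acceleration coefficients
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:                  # update personal best
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:                 # update global best
                    gbest, gval = pos[i][:], v
    return gbest, gval
```

MapReduce parallelism enters by sharding the fitness evaluations (the dominant cost for a BP network on big data) across mappers, with a reducer merging personal and global bests.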
Geist, G.A.; Howell, G.W.; Watkins, D.S.
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
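For context, the baseline QR iteration that BR is compared against can be sketched as follows. This is a minimal unshifted version in pure Python (our illustration; production codes add shifts and work on Hessenberg form): each step factors A = QR and forms RQ, a similarity transform that converges to (block) upper-triangular form when eigenvalue moduli are distinct.

```python
def qr_decompose(A):
    """Thin QR via classical Gram-Schmidt (adequate for small dense matrices)."""
    n = len(A)
    Q = [[0.0] * n for _ in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(n)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(n))
            v = [v[i] - R[k][j] * Q[i][k] for i in range(n)]
        R[j][j] = sum(x * x for x in v) ** 0.5
        for i in range(n):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def qr_eigenvalues(A, iters=300):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k is similar to A_k, so the
    diagonal converges to the eigenvalues (real, distinct-modulus case)."""
    n = len(A)
    for _ in range(iters):
        Q, R = qr_decompose(A)
        A = [[sum(R[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return sorted(A[i][i] for i in range(n))
```

BR replaces the orthogonal Q of each step with a cheaper, band-preserving transformation, which is what yields the reported 30-60x speedup on the narrowband matrices from the look-ahead Lanczos process.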
Algorithmic Mechanism Design of Evolutionary Computation
Pei, Yan
2015-01-01
We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
Genetic algorithm (GA) inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) to produce the solution. The preprocessor includes, as its first step, making use of all the information such as that of nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightway through a GA without having/using the information/knowledge of the character of the system, we would do consciously a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem including NP-complete ones before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
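A minimal GA of the kind the abstract describes, for unconstrained one-dimensional function optimization, can be sketched as follows (our own illustration; the hard-coded parameter values are conventional defaults, exactly the kind of settings the proposed preprocessor would tune per problem):

```python
import random

def genetic_minimize(f, lo, hi, pop_size=40, gens=80, pc=0.9, pm=0.1, seed=2):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism (the best individual always survives)."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [min(pop, key=f)]                      # elitism
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=f)       # tournament of size 3
            b = min(rng.sample(pop, 3), key=f)
            child = (a + b) / 2.0 if rng.random() < pc else a  # blend crossover
            if rng.random() < pm:
                child += rng.gauss(0.0, 0.05 * (hi - lo))      # mutation
            nxt.append(min(max(child, lo), hi))                # clamp to bounds
        pop = nxt
    return min(pop, key=f)
```

Changing pop_size, pc, pm, or the search bounds [lo, hi] can change both solution quality and cost substantially, which is the motivation for selecting them per problem rather than universally.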
NASA Technical Reports Server (NTRS)
Fatemi, Emad; Jerome, Joseph; Osher, Stanley
1989-01-01
A micron n+ - n - n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially non-oscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.
Tensor network algorithm by coarse-graining tensor renormalization on finite periodic lattices
NASA Astrophysics Data System (ADS)
Zhao, Hui-Hai; Xie, Zhi-Yuan; Xiang, Tao; Imada, Masatoshi
2016-03-01
We develop coarse-graining tensor renormalization group algorithms to compute physical properties of two-dimensional lattice models on finite periodic lattices. Two different coarse-graining strategies, one based on the tensor renormalization group and the other based on the higher-order tensor renormalization group, are introduced. In order to optimize the tensor network model globally, a sweeping scheme is proposed to account for the renormalization effect from the environment tensors under the framework of the second renormalization group. We demonstrate the algorithms on the classical Ising model on the square lattice and the Kitaev model on the honeycomb lattice, and show that the finite-size algorithms achieve substantially more accurate results than the corresponding infinite-size ones.
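As a minimal, self-contained illustration of the tensor-network representation such algorithms coarse-grain, the sketch below builds the standard local tensor of the 2D classical Ising model (by splitting the bond Boltzmann matrix Q = W W^T) and contracts it exactly on a 2x2 periodic lattice, checking against a brute-force spin sum. The coarse-graining and sweeping steps of the paper are not reproduced here:

```python
import numpy as np
from itertools import product

def ising_tensor(beta):
    """Local tensor for the 2D Ising model: split the bond Boltzmann
    matrix Q[s,s'] = exp(beta*s*s') as Q = W W^T (Q is symmetric positive
    definite for beta > 0), then T[i,j,k,l] = sum_s W[s,i]W[s,j]W[s,k]W[s,l]."""
    Q = np.exp(beta * np.outer([1, -1], [1, -1]))
    lam, U = np.linalg.eigh(Q)
    W = U @ np.diag(np.sqrt(lam))
    return np.einsum('si,sj,sk,sl->ijkl', W, W, W, W)

beta = 0.4
T = ising_tensor(beta)
# Exact contraction on a 2x2 periodic lattice; index order (left,right,up,down).
Z_tn = np.einsum('abcd,baef,ghdc,hgfe->', T, T, T, T)

# Brute force over the 4 spins: on a 2x2 torus every nearest-neighbour
# pair is doubly connected by the periodic wrap, hence 8 bonds in total.
Z_bf = 0.0
for sA, sB, sC, sD in product([1, -1], repeat=4):
    E = 2 * (sA * sB + sC * sD + sA * sC + sB * sD)
    Z_bf += np.exp(beta * E)
```

A TRG step would then truncate an SVD split of T to a fixed bond dimension and recontract; here the lattice is small enough to contract exactly.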
1997-06-13
Project ACHIEVE was a math/science academic enhancement program aimed at first year high school Hispanic American students. Four high schools -- two in El Paso, Texas and two in Bakersfield, California -- participated in this Department of Energy-funded program during the spring and summer of 1996. Over 50 students, many of whom felt they were facing a nightmare future, were given the opportunity to work closely with personal computers and software, sophisticated calculators, and computer-based laboratories -- an experience which their regular academic curriculum did not provide. Math and science projects, exercises, and experiments were completed that emphasized independent and creative applications of scientific and mathematical theories to real world problems. The most important outcome was the exposure Project ACHIEVE provided to students concerning the college and technical-field career possibilities available to them.
Achieving Goal Blood Pressure.
Laurent, Stéphane
2015-07-01
Both monotherapy and combination therapy options are appropriate for antihypertensive therapy according to the 2013 European Society of Hypertension (ESH)/European Society of Cardiology (ESC) guidelines. Most patients require more than one agent to achieve blood pressure (BP) control, and adding a second agent is more effective than doubling the dose of existing therapy. The addition of a third agent may be required to achieve adequate BP reductions in some patients. Single-pill fixed-dose combinations (FDCs) allow multiple-drug regimens to be delivered without any negative impact on patient compliance or persistence with therapy. FDCs also have documented beneficial clinical effects and use of FDCs containing two or three agents is recommended by the 2013 ESH/ESC guidelines. PMID:26002423
The Relationship between Birth Order and Academic Achievement.
ERIC Educational Resources Information Center
Ogletree, Earl J.
1980-01-01
The author reviews some selected literature that reports higher academic achievement for first-borns and findings that large enough age gaps between siblings can offset first-born advantages in achievement. (Editor/SJL)
Improved autonomous star identification algorithm
NASA Astrophysics Data System (ADS)
Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong
2015-06-01
The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the algorithm is designed to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm effectively accelerates star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi’an Science and Technology Plan, China (Grant No. CXY1350(4)).
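The rotation-invariant feature described above can be illustrated with a toy sketch: the sorted logarithms of the plane distances from a navigation star to its neighbours are unchanged when the whole star field is rotated. The function names, and the use of sorting rather than the paper's LPT-based construction, are illustrative assumptions:

```python
import numpy as np

def feature_vector(nav_star, neighbors):
    """Toy rotation-invariant star pattern: sorted logarithms of the
    plane distances from the navigation star to its neighbours."""
    d = np.linalg.norm(np.asarray(neighbors) - np.asarray(nav_star), axis=1)
    return np.sort(np.log(d))

def rotate(points, theta):
    """Rotate 2-D point(s) about the origin by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return np.asarray(points) @ R.T

rng = np.random.default_rng(1)
nav = np.array([0.3, -0.2])
nbrs = rng.uniform(-1, 1, size=(8, 2))
# The feature is identical before and after rotating the whole image.
f0 = feature_vector(nav, nbrs)
f1 = feature_vector(rotate(nav, 0.7), rotate(nbrs, 0.7))
```

Because rotation preserves distances, the feature needs no circular shift to be matched against a database, which is the property the abstract exploits.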
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-01-01
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm: the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using a Kalman filter and a particle filter, respectively, which improves the computational efficiency over using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows time synchronization to be achieved. The time synchronization performance of this algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm has higher time synchronization precision than traditional time synchronization algorithms. PMID:26404291
A new frame-based registration algorithm.
Yan, C H; Whalen, R T; Beaupre, G S; Sumanaweera, T S; Yen, S Y; Napel, S
1998-01-01
This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be composed of straight rods, as opposed to the N structures or the accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm and compare its performance to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques that have knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p <= 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy, is robust against model mismatch, and allows greater flexibility in the frame structure. Lastly, it reduces the frame construction cost, as adherence to a concise model is not required. PMID:9472834
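The core least-squares step for a straight rod can be sketched as a principal-direction fit: the rod axis is the leading right-singular vector of the (optionally weighted) centred point cloud. This is a generic sketch of 3D line fitting, not the paper's full registration pipeline; fit_rod and its weighting scheme are assumptions:

```python
import numpy as np

def fit_rod(points, weights=None):
    """Least-squares 3D line fit: returns (centroid, unit direction).
    The direction is the principal right-singular vector of the
    (optionally weighted) centred point cloud."""
    P = np.asarray(points, float)
    w = np.ones(len(P)) if weights is None else np.asarray(weights, float)
    c = (w[:, None] * P).sum(0) / w.sum()
    _, _, Vt = np.linalg.svd(np.sqrt(w)[:, None] * (P - c))
    return c, Vt[0]

# Synthetic rod along a known direction, sampled with noise.
rng = np.random.default_rng(0)
d_true = np.array([1.0, 2.0, -1.0]) / np.linalg.norm([1.0, 2.0, -1.0])
t = np.linspace(-50, 50, 200)
pts = t[:, None] * d_true + rng.normal(0, 0.2, (200, 3))
c, d = fit_rod(pts)
```

A frame of several such rods, each fitted independently, then yields the point-to-model correspondences that drive the registration.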
The annealing robust backpropagation (ARBP) learning algorithm.
Chuang, C C; Su, S F; Hsiao, C C
2000-01-01
Multilayer feedforward neural networks are often referred to as universal approximators. Nevertheless, if the training data are corrupted by large noise, such as outliers, traditional backpropagation learning schemes may not always come up with acceptable performance. Even though various robust learning algorithms have been proposed in the literature, those approaches still suffer from the initialization problem. In those robust learning algorithms, the so-called M-estimator is employed, whose loss function serves to discriminate outliers from the majority by degrading their effect on learning. However, the loss function used in those algorithms may not correctly discriminate against outliers. In this paper, the annealing robust backpropagation (ARBP) learning algorithm, which adopts the annealing concept into robust learning algorithms, is proposed to deal with the problem of modeling in the presence of outliers. The proposed algorithm has been employed in various examples, and the results all demonstrated its superiority over other robust learning algorithms in the presence of outliers. Not only is the annealing concept adopted into the robust learning algorithms, but the annealing schedule k/t was also found experimentally to achieve the best performance among the schedules tried, where k is a constant and t is the epoch number. PMID:18249835
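The annealing idea, with the k/t schedule the paper reports, can be sketched with a robust line fit in which the scale of a Cauchy-type weight function is annealed each epoch. The IRLS solver, the Cauchy weights, and all names here are illustrative stand-ins for the paper's backpropagation setting:

```python
import numpy as np

def robust_line_fit(x, y, epochs=50, k=10.0):
    """Annealed robust fit of y ~ a*x + b via iteratively reweighted
    least squares. Cauchy weights w = 1/(1 + (r/sigma)^2) with the
    annealing schedule sigma = k/t (t = epoch number): a large scale
    early on avoids bad local minima, a small scale later suppresses
    outliers almost completely."""
    A = np.column_stack([x, np.ones_like(x)])
    theta = np.linalg.lstsq(A, y, rcond=None)[0]   # ordinary LS start
    for t in range(1, epochs + 1):
        r = y - A @ theta
        sigma = k / t                              # the k/t schedule
        w = 1.0 / (1.0 + (r / sigma) ** 2)
        Aw = A * w[:, None]
        theta = np.linalg.solve(A.T @ Aw, Aw.T @ y)  # weighted normal equations
    return theta

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 100)
y[::10] += 30.0                                    # gross outliers
a, b = robust_line_fit(x, y)
```

Despite 10% gross outliers, the annealed fit recovers slope and intercept close to their true values, where a plain least-squares fit would be biased.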
An Iterative Soft-Decision Decoding Algorithm
NASA Technical Reports Server (NTRS)
Lin, Shu; Koumoto, Takuya; Takata, Toyoo; Kasami, Tadao
1996-01-01
This paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. Simulation results for the RM(64,22), EBCH(64,24), RM(64,42) and EBCH(64,45) codes show that the proposed decoding algorithm achieves practically (or near) optimal error performance with significant reduction in decoding computational complexity. The average number of search iterations is also small even for low signal-to-noise ratio.
Barriers to Occupational Achievement.
ERIC Educational Resources Information Center
Gurman, Ernest B.
The under-representation of women in prestigious occupations and the lower average pay women earn has been of concern for many years. This study investigated two alternative explanations for this under-representation of females in prestigious and higher paying occupations. The first explanation was external barriers such as discrimination, and the…
HEATR project: ATR algorithm parallelization
NASA Astrophysics Data System (ADS)
Deardorf, Catherine E.
1998-09-01
High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable support software to exploit emerging parallel computing technologies and enable application of scalable HPCs for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This allows the team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.
Higher order Godunov schemes for isothermal hydrodynamics
NASA Technical Reports Server (NTRS)
Balsara, Dinshaw S.
1994-01-01
In this paper we construct higher order Godunov schemes for isothermal flow. Isothermal hydrodynamics serves as a good representation for several systems of astrophysical interest. The schemes designed here have second-order accuracy in space and time, and some are third-order accurate for advection. Moreover, several ingredients of these schemes are essential components of even higher order schemes. The methods designed here have excellent ability to represent smooth flow yet capture shocks with high resolution. Several test problems are presented. The algorithms presented here are compared with other algorithms having a comparable formal order of accuracy.
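A minimal first-order Godunov-type update for 1-D isothermal hydrodynamics can be sketched as below, using a Rusanov (local Lax-Friedrichs) flux as a stand-in for an exact Riemann solver; the paper's second- and third-order reconstructions are not reproduced, and all names and parameter values are illustrative:

```python
import numpy as np

def isothermal_step(rho, m, dx, dt, c=1.0):
    """One first-order conservative update for 1-D isothermal
    hydrodynamics with periodic boundaries. Conserved variables:
    density rho and momentum m = rho*u; isothermal pressure p = c^2*rho.
    The interface flux is the Rusanov (local Lax-Friedrichs) flux."""
    u = m / rho
    U = np.stack([rho, m])
    F = np.stack([m, m * u + c * c * rho])      # physical flux of (rho, m)
    a = np.abs(u) + c                           # max wave speed per cell
    Ur, Fr, ar = np.roll(U, -1, 1), np.roll(F, -1, 1), np.roll(a, -1)
    s = np.maximum(a, ar)
    Fhat = 0.5 * (F + Fr) - 0.5 * s * (Ur - U)  # flux at interface i+1/2
    Unew = U - dt / dx * (Fhat - np.roll(Fhat, 1, 1))
    return Unew[0], Unew[1]

# Smooth density bump in a uniformly moving periodic flow.
N = 200
x = np.linspace(0, 1, N, endpoint=False)
rho = 1.0 + 0.2 * np.exp(-100 * (x - 0.5) ** 2)
m = 0.5 * rho                                   # uniform velocity 0.5
dx, dt = 1.0 / N, 0.002                         # CFL ~ 0.6 for these values
mass0 = rho.sum() * dx
for _ in range(100):
    rho, m = isothermal_step(rho, m, dx, dt)
```

Because the update is in conservation form with periodic wrap-around, total mass is preserved to round-off, which is the property shock-capturing schemes are built around.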
Emissivity spectra estimated with the MaxEnTES algorithm
NASA Astrophysics Data System (ADS)
Barducci, A.; Guzzi, D.; Lastri, C.; Nardino, V.; Pippi, I.; Raimondi, V.
2014-10-01
Temperature and Emissivity Separation (TES) applied to multispectral or hyperspectral Thermal Infrared (TIR) images of the Earth is a relevant issue for many remote sensing applications. The TIR spectral radiance can be modeled by means of the well-known Planck's law, as a function of the target's temperature and emissivity. The estimation of these target parameters (i.e. Temperature Emissivity Separation, aka TES) is hindered by the circumstance that the number of measurements is smaller than the number of unknowns. Existing TES algorithms implement a temperature estimator in which the uncertainty is removed by adopting some a priori assumption that conditions the retrieved temperature and emissivity. Due to its mathematical structure, the Maximum Entropy formalism (MaxEnt) seems well suited for carrying out this complex TES operation. The main advantage of MaxEnt statistical inference is the absence of any external hypothesis, which instead characterizes most of the existing TES algorithms. In this paper we describe the performance of the MaxEnTES (Maximum Entropy Temperature Emissivity Separation) algorithm as applied to ten TIR spectral channels of a MIVIS dataset collected over Italy. We compare the temperature and emissivity spectra estimated by this algorithm with independent estimations achieved with two previous TES methods (the Grey Body Emissivity (GBE) and the Model Emittance Calculation (MEC)). We show that MaxEnTES is a reliable algorithm in terms of its higher output signal-to-noise ratio and the negligibility of the systematic errors that bias the estimated temperature in other TES procedures.
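The Planck-law model underlying any TES scheme can be sketched directly; the helper below also inverts the law for brightness temperature, the quantity whose coupling with emissivity creates the underdetermination described above. Function names are illustrative:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(lam, T, emissivity=1.0):
    """Spectral radiance of a grey body at wavelength lam (m), W m^-3 sr^-1."""
    return emissivity * 2 * H * C**2 / lam**5 / (np.exp(H * C / (lam * K * T)) - 1)

def brightness_temperature(lam, L):
    """Invert Planck's law for temperature, assuming unit emissivity --
    the quantity a TES scheme must disentangle from the true emissivity."""
    return H * C / (lam * K * np.log(1 + 2 * H * C**2 / (lam**5 * L)))

lam = 10e-6                      # a 10-micron channel, in the TIR window
T = 300.0
L = planck_radiance(lam, T)
T_rec = brightness_temperature(lam, L)   # exact round trip at emissivity 1
# With emissivity < 1 the brightness temperature underestimates the true
# temperature -- the one-measurement-per-channel ambiguity TES resolves.
Tb = brightness_temperature(lam, planck_radiance(lam, T, emissivity=0.95))
```

With N channels there are N radiances but N+1 unknowns (N emissivities plus one temperature), which is why every TES method must add a constraint, be it a priori or, as in MaxEnTES, maximum entropy.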
Ellis, Beckie; Gates, Judy
2005-01-01
Magnet has become the gold standard for nursing excellence. It is the symbol of effective and safe patient care. It evaluates components that inspire safe care, including employee satisfaction and retention, professional education, and effective interdisciplinary collaboration. In an organization whose mission focuses on excellent patient care, Banner Thunderbird Medical Center found that pursuing Magnet status was clearly the next step. In this article, we will discuss committee selection, education, team building, planning, and the discovery process that define the Magnet journey. The road to obtaining Magnet status has permitted many opportunities to celebrate our achievements. PMID:16056158
Quantum algorithms for quantum field theories.
Jordan, Stephen P; Lee, Keith S M; Preskill, John
2012-06-01
Quantum field theory reconciles quantum mechanics and special relativity, and plays a central role in many areas of physics. We developed a quantum algorithm to compute relativistic scattering probabilities in a massive quantum field theory with quartic self-interactions (φ^4 theory) in spacetime of four and fewer dimensions. Its run time is polynomial in the number of particles, their energy, and the desired precision, and the algorithm applies at both weak and strong coupling. In the strong-coupling and high-precision regimes, our quantum algorithm achieves exponential speedup over the fastest known classical algorithm. PMID:22654052
Comparison of teacher-directed and student-directed journals on achievement in college chemistry
NASA Astrophysics Data System (ADS)
Anderson, Catherine Ann
The purpose of this research was to use written student journals as a means to improve academic achievement in a one-semester introductory college chemistry course designed for nonscience majors. Two kinds of journals were investigated to compare their effectiveness in improving academic achievement and their ability to strengthen conceptual and algorithmic problem-solving mastery. Student opinions toward journal writing in college chemistry were also compared, and each journal format was examined for possible differences between the genders. The journal types compared were a teacher-directed format and a student-directed format. In the teacher-directed group, 62 students responded to specific topics and readings assigned by the instructor; the 74 students in the student-directed group individually chose the topics that they wrote about. Students wrote two to four entries per week for the duration of the semester (15 weeks). Results showed no significant differences at the alpha=0.05 level between the journal types for improvement of academic achievement as measured by the ACS standardized General-Organic-Biological Chemistry Test. No significant differences in student ability to solve algorithmic and conceptual chemistry problems were detected, and there were no significant differences between the genders with respect to the journal type analyses. There was a significant difference in the students' opinions toward the journal format they were assigned to use: the student-directed group gave significantly higher ratings to their format than the teacher-directed students. There were no significant differences in the opinions of males and females with regard to journal format. Based on the outcomes of this investigation, for instructors choosing to use written student journals in their college science classes, the use of student-directed journals is recommended. Although there is no benefit in terms of academic achievement or conceptual
Leadership, self-efficacy, and student achievement
NASA Astrophysics Data System (ADS)
Grayson, Kristin
This study examined the relationships between teacher leadership, science teacher self-efficacy, and fifth-grade science student achievement in diverse schools in a San Antonio, Texas, metropolitan school district. Teachers completed a modified version of the Leader Behavior Description Questionnaire (LBDQ) Form XII by Stogdill (1969) and the Science Efficacy and Belief Expectations for Science Teaching (SEBEST) by Ritter, Boone, and Rubba (2001, January). Students' scores on the Texas Assessment of Knowledge and Skills (TAKS) measured fifth-grade science achievement. At the teacher level of analysis, multiple regressions related teachers' science self-efficacy and classroom leadership behaviors to various teacher and school demographic variables. Predictors of teacher self-efficacy beliefs included the teacher's level of education, gender, and leadership initiating structure; the only significant predictor of teacher self-efficacy outcome expectancy was gender. Higher teacher self-efficacy beliefs predicted higher leadership initiating structure. At the school level of analysis, a higher percentage of students from low socio-economic backgrounds and a higher percentage of limited-English-proficient students predicted lower school mean science achievement. These findings suggest a need for continued research to clarify relationships between teacher classroom leadership, science teacher self-efficacy, and student achievement, especially at the teacher level of analysis. Findings also indicate the importance of developing instructional methods that address student demographics and needs so that all students, despite their backgrounds, will achieve in science.
Aerocapture Guidance Algorithm Comparison Campaign
NASA Technical Reports Server (NTRS)
Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric
2002-01-01
Aerocapture is a promising technique for future human interplanetary missions. The Mars Sample Return was initially based on an insertion by aerocapture, and a CNES orbiter, Mars Premier, was developed to demonstrate this concept. Mainly due to budget constraints, the aerocapture was cancelled for the French orbiter. Many studies were carried out during the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA, with a fruitful joint working group. To conclude this study, an evaluation campaign was performed to test the different algorithms. The objective was to assess the robustness, accuracy, capability to limit the load, and complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive with the analytical concepts. All the other algorithms are well adapted to guarantee the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC a good compromise.
Achieving ultra-high temperatures with a resistive emitter array
NASA Astrophysics Data System (ADS)
Danielson, Tom; Franks, Greg; Holmes, Nicholas; LaVeigne, Joe; Matis, Greg; McHugh, Steve; Norton, Dennis; Vengel, Tony; Lannon, John; Goodwin, Scott
2016-05-01
The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to also develop larger-format infrared emitter arrays to support the testing of systems incorporating these detectors. In addition to larger formats, many scene projector users require much higher simulated temperatures than can be generated with current technology in order to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024 x 1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During earlier phases of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1400 K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. A 'scalable' Read In Integrated Circuit (RIIC) is also being developed under the same UHT program to drive the high temperature pixels. This RIIC will utilize through-silicon via (TSV) and Quilt Packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the yield limitations inherent in large-scale integrated circuits. Results of design verification testing of the completed RIIC will be presented and discussed.
Reasoning about systolic algorithms
Purushothaman, S.
1986-01-01
Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on representing an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.
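A systolic algorithm of the kind verified in the dissertation can be illustrated with a cycle-accurate software simulation of a weight-stationary convolution array: one coefficient per processing element, samples shifting one PE per cycle. Folding the per-PE adders into a single per-cycle sum is a simplification, and all names are illustrative:

```python
def systolic_fir(x, w):
    """Cycle-accurate sketch of a weight-stationary systolic convolution
    array: each processing element holds one coefficient w[k]; input
    samples shift one PE to the right per cycle, and each cycle every PE
    contributes its product to the output (here folded into one sum).
    Computes the full convolution y[n] = sum_k w[k] * x[n-k]."""
    K = len(w)
    regs = [0.0] * K                      # the array's internal delay line
    y = []
    for t in range(len(x) + K - 1):
        # systolic shift: new sample enters PE 0, others pass right
        regs = [x[t] if t < len(x) else 0.0] + regs[:-1]
        y.append(sum(wk * r for wk, r in zip(w, regs)))
    return y

x = [1.0, 2.0, 3.0, 4.0]
w = [0.5, -1.0, 0.25]
y = systolic_fir(x, w)
```

The simulation's per-cycle state (the register contents) is exactly the kind of object the recurrence-equation proofs in the dissertation reason about: correctness amounts to showing the invariant that after cycle t the registers hold x[t], x[t-1], ..., x[t-K+1].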
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.
Improved Continuous-Time Higher Harmonic Control Using Hinfinity Methods
NASA Astrophysics Data System (ADS)
Fan, Frank H.
The helicopter is a versatile aircraft that can take off and land vertically, hover efficiently, and maneuver in confined space. This versatility is enabled by the main rotor, which also causes undesired harmonic vibration during operation. This unwanted vibration has a negative impact on the practicality of the helicopter and also increases its operational cost. Passive control techniques have been applied to helicopter vibration suppression, but these methods are generally heavy and are not robust to changes in operating conditions. Feedback control offers the advantages of robustness and potentially higher performance over passive control techniques, and amongst the various feedback schemes, Shaw's higher harmonic control algorithm has been shown to be an effective method for attenuating harmonic disturbance in helicopters. In this thesis, the higher harmonic control algorithm is further developed to achieve improved performance. One goal in this thesis is to determine the importance of periodicity in the helicopter rotor dynamics for control synthesis. Based on the analysis of wind tunnel data and simulation results, we conclude the helicopter rotor can be modeled reasonably well as linear and time-invariant for control design purposes. Modeling the helicopter rotor as linear time-invariant allows us to apply linear control theory concepts to the higher harmonic control problem. Another goal in this thesis is to find the limits of performance in harmonic disturbance rejection. To achieve this goal, we first define the metrics to measure the performance of the controller in terms of response speed and robustness to changes in the plant dynamics. The performance metrics are incorporated into an Hinfinity control problem. For a given plant, the resulting Hinfinity controller achieves the maximum performance, thus allowing us to identify the performance limitation in harmonic disturbance rejection. However, the Hinfinity controllers are of high order, and may
Correlation of Wissler Human Thermal Model Blood Flow and Shiver Algorithms
NASA Technical Reports Server (NTRS)
Bue, Grant; Makinen, Janice; Cognata, Thomas
2010-01-01
The Wissler Human Thermal Model (WHTM) is a thermal math model of the human body that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. The model has been shown to predict core temperature and skin temperatures higher and lower, respectively, than in tests of subjects in a crew escape suit working in controlled hot environments. Conversely, the model predicts core temperature and skin temperatures lower and higher, respectively, than in tests of lightly clad subjects immersed in cold water. The blood flow algorithms of the model have been investigated to allow for more and less flow, respectively, in the cold and hot cases. These changes have yielded better correlation of skin and core temperatures in the cold and hot cases. The algorithm for onset of shiver did not need to be modified to achieve good agreement in cold immersion simulations.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
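The order-of-accuracy notion used above can be illustrated with a minimal sketch (not the paper's actual schemes): a fourth-order central difference for the first derivative, whose error should shrink by about 2^4 = 16 each time the grid spacing is halved.

```python
import math

def d1_central4(f, x, h):
    # Fourth-order central difference approximation of f'(x)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

# Measure the error on f(x) = sin(x) at x = 1 for a sequence of halved spacings
errs = []
for h in (0.1, 0.05, 0.025):
    errs.append(abs(d1_central4(math.sin, 1.0, h) - math.cos(1.0)))

# Observed convergence order: log2 of consecutive error ratios, each close to 4
orders = [math.log(errs[i] / errs[i + 1], 2) for i in range(2)]
print(orders)
```

The same experiment, run with a higher-order stencil, would show a correspondingly steeper error decay.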
Recognizing outstanding achievements
NASA Astrophysics Data System (ADS)
Speiss, Fred
One function of any professional society is to provide an objective, informed means for recognizing outstanding achievements in its field. In AGU's Ocean Sciences section we have a variety of means for carrying out this duty. They include recognition of outstanding student presentations at our meetings, dedication of special sessions, nomination of individuals to be fellows of the Union, invitations to present Sverdrup lectures, and recommendations for Macelwane Medals, the Ocean Sciences Award, and the Ewing Medal. Since the decision to bestow these awards requires initiative and judgement by members of our section, in addition to a deserving individual, it seems appropriate to review the selection process for each and to urge you to identify those deserving of recognition.
Bradburne, John; Patton, Tisha C.
2001-02-25
When Fluor Fernald took over the management of the Fernald Environmental Management Project in 1992, the estimated closure date of the site was more than 25 years into the future. Fluor Fernald, in conjunction with DOE-Fernald, introduced the Accelerated Cleanup Plan, which was designed to substantially shorten that schedule and save taxpayers more than $3 billion. The management of Fluor Fernald believes there are three fundamental concerns that must be addressed by any contractor hoping to achieve closure of a site within the DOE complex. They are relationship management, resource management and contract management. Relationship management refers to the interaction between the site and local residents, regulators, union leadership, the workforce at large, the media, and any other interested stakeholder groups. Resource management is of course related to the effective administration of the site knowledge base and the skills of the workforce, the attraction and retention of qualified and competent technical personnel, and the best recognition and use of appropriate new technologies. Perhaps most importantly, resource management must also include a plan for survival in a flat-funding environment. Lastly, creative and disciplined contract management will be essential to effecting the closure of any DOE site. Fluor Fernald, together with DOE-Fernald, is breaking new ground in the closure arena, and "business as usual" has become a thing of the past. How Fluor Fernald has managed its work at the site over the last eight years, and how it will manage the new site closure contract in the future, will be an integral part of achieving successful closure at Fernald.
Innovations in Lattice QCD Algorithms
Konstantinos Orginos
2006-06-25
Lattice QCD calculations demand a substantial amount of computing power in order to achieve the high precision results needed to better understand the nature of strong interactions, assist experiment to discover new physics, and predict the behavior of a diverse set of physical systems ranging from the proton itself to astrophysical objects such as neutron stars. However, computer power alone is clearly not enough to tackle the calculations we need to be doing today. A steady stream of recent algorithmic developments has made an important impact on the kinds of calculations we can currently perform. In this talk I am reviewing these algorithms and their impact on the nature of lattice QCD calculations performed today.
Distilling the Verification Process for Prognostics Algorithms
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai
2013-01-01
The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.
Plimpton, Steven J.; Hendrickson, Bruce; Burns, Shawn P.; McLendon, William III; Rauchwerger, Lawrence
2005-07-15
The method of discrete ordinates is commonly used to solve the Boltzmann transport equation. The solution in each ordinate direction is most efficiently computed by sweeping the radiation flux across the computational grid. For unstructured grids this poses many challenges, particularly when implemented on distributed-memory parallel machines where the grid geometry is spread across processors. We present several algorithms relevant to this approach: (a) an asynchronous message-passing algorithm that performs sweeps simultaneously in multiple ordinate directions, (b) a simple geometric heuristic to prioritize the computational tasks that a processor works on, (c) a partitioning algorithm that creates columnar-style decompositions for unstructured grids, and (d) an algorithm for detecting and eliminating cycles that sometimes exist in unstructured grids and can prevent sweeps from successfully completing. Algorithms (a) and (d) are fully parallel; algorithms (b) and (c) can be used in conjunction with (a) to achieve higher parallel efficiencies. We describe our message-passing implementations of these algorithms within a radiation transport package. Performance and scalability results are given for unstructured grids with up to 3 million elements (500 million unknowns) running on thousands of processors of Sandia National Laboratories' Intel Tflops machine and DEC-Alpha CPlant cluster.
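Algorithm (d) above, detecting cells caught in dependency cycles that would stall a sweep, can be sketched with a standard Kahn-style topological sort: cells that never become ready are in a cycle or blocked behind one. This is an illustrative reconstruction, not the authors' implementation, and the cell names are hypothetical.

```python
from collections import deque

def find_cycle_nodes(deps):
    """deps: {cell: set of upstream cells swept first}; every upstream
    cell must also appear as a key. Returns cells in (or blocked by) cycles."""
    indeg = {c: len(u) for c, u in deps.items()}
    out = {c: set() for c in deps}
    for c, ups in deps.items():
        for u in ups:
            out[u].add(c)            # downstream adjacency
    ready = deque(c for c, d in indeg.items() if d == 0)
    done = set()
    while ready:                      # peel off cells whose inputs are swept
        c = ready.popleft()
        done.add(c)
        for w in out[c]:
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    return set(deps) - done           # never became ready: cyclic region

# A -> B -> C -> B: the B-C cycle would prevent the sweep from completing
deps = {"A": set(), "B": {"A", "C"}, "C": {"B"}}
print(find_cycle_nodes(deps))
```

In the paper's setting the offending edges would then be eliminated so the sweep can complete; here the sketch only identifies the cyclic region.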
Algorithm and program for information processing with the filin apparatus
NASA Technical Reports Server (NTRS)
Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.
1979-01-01
The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on the Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is an algorithm of the lowest level; following evaluation, information free of uninformative segments is subject to further processing with algorithms of a higher level. The language used is FORTRAN 4.
Achievement Goals and Achievement Emotions: A Meta-Analysis
ERIC Educational Resources Information Center
Huang, Chiungjung
2011-01-01
This meta-analysis synthesized 93 independent samples (N = 30,003) in 77 studies that reported in 78 articles examining correlations between achievement goals and achievement emotions. Achievement goals were meaningfully associated with different achievement emotions. The correlations of mastery and mastery approach goals with positive achievement…
Clutter discrimination algorithm simulation in pulse laser radar imaging
NASA Astrophysics Data System (ADS)
Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Su, Xuan; Zhu, Fule
2015-10-01
Pulse laser radar imaging performance is greatly influenced by different kinds of clutter, and various algorithms have been developed to mitigate it. However, estimating the performance of a new algorithm is difficult. Here, a simulation model for evaluating clutter discrimination algorithms is presented. This model consists of laser pulse emission, clutter jamming, laser pulse reception and target image production. Additionally, a hardware platform is set up to gather clutter data reflected by ground and trees; the logged data serve as the clutter-jamming input to the simulation model. The hardware platform includes a laser diode, a laser detector and a high-sample-rate data logging circuit. The laser diode transmits short laser pulses (40 ns FWHM) at a 12.5 kHz pulse rate and a 905 nm wavelength. An analog-to-digital converter chip integrated in the sampling circuit works at 250 megasamples per second. The simulation model and the hardware platform together form a clutter discrimination algorithm simulation system. Using this system, after analyzing the logged clutter data, a new compound pulse detection algorithm is developed. This new algorithm combines a matched filter algorithm with a constant fraction discrimination (CFD) algorithm: first, the laser echo pulse signal is processed by the matched filter; then the CFD algorithm is applied. Finally, clutter jamming from ground and trees is discriminated and the target image is produced. Laser radar images are simulated using the CFD algorithm, the matched filter algorithm and the new algorithm respectively. Simulation results demonstrate that the new algorithm best mitigates clutter reflected by ground and trees and achieves the best target imaging effect.
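The compound detection scheme described above (matched filtering followed by constant-fraction discrimination) can be sketched per echo as follows. The pulse template, the sample values, and the 50% fraction are hypothetical, not the authors' 40 ns pulse data.

```python
def matched_filter(signal, template):
    # Discrete cross-correlation of the echo with the known pulse shape
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

def cfd_crossing(y, fraction=0.5):
    # Constant-fraction discrimination: index where the filtered pulse
    # first crosses a fixed fraction of its peak
    thr = fraction * max(y)
    for i, v in enumerate(y):
        if v >= thr:
            return i
    return None

template = [0.2, 0.8, 1.0, 0.8, 0.2]       # idealized 5-sample pulse shape
echo = [0.0] * 10 + template + [0.0] * 10  # clean echo starting at sample 10
y = matched_filter(echo, template)
print(cfd_crossing(y))
```

In the real system the echo would carry ground/tree clutter, and the matched filter's noise suppression is what makes the subsequent CFD threshold reliable.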
Generalized Higher Degree Total Variation (HDTV) Regularization
Hu, Yue; Ongie, Greg; Ramani, Sathish; Jacob, Mathews
2015-01-01
We introduce a family of novel image regularization penalties called generalized higher degree total variation (HDTV). These penalties further extend our previously introduced HDTV penalties, which generalize the popular total variation (TV) penalty to incorporate higher degree image derivatives. We show that many of the proposed second degree extensions of TV are special cases or are closely approximated by a generalized HDTV penalty. Additionally, we propose a novel fast alternating minimization algorithm for solving image recovery problems with HDTV and generalized HDTV regularization. The new algorithm enjoys a ten-fold speed up compared to the iteratively reweighted majorize minimize algorithm proposed in a previous work. Numerical experiments on 3D magnetic resonance images and 3D microscopy images show that HDTV and generalized HDTV improve the image quality significantly compared with TV. PMID:24710832
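The motivation for higher-degree penalties is easiest to see in one dimension: total variation penalizes smooth ramps as heavily as sharp edges, while a second-degree penalty (a simplified 1D analogue of HDTV, not the paper's rotation-invariant formulation) leaves ramps free and only charges for curvature.

```python
def tv_penalty(x):
    # First-degree total variation: sum of absolute first differences
    return sum(abs(b - a) for a, b in zip(x, x[1:]))

def hdtv2_penalty(x):
    # Second-degree analogue: sum of absolute second differences,
    # so linear ramps cost nothing while jumps are penalized
    return sum(abs(c - 2*b + a) for a, b, c in zip(x, x[1:], x[2:]))

ramp = [0.0, 1.0, 2.0, 3.0, 4.0]
step = [0.0, 0.0, 4.0, 4.0, 4.0]
print(tv_penalty(ramp), hdtv2_penalty(ramp))   # ramp: TV = 4.0, second-degree = 0.0
print(tv_penalty(step), hdtv2_penalty(step))   # step: TV = 4.0, second-degree = 8.0
```

This is why TV regularization tends toward staircase artifacts in smooth regions, the behavior the higher-degree penalties are designed to avoid.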
Algorithm of chest wall keloid treatment.
Long, Xiao; Zhang, Mingzi; Wang, Yang; Zhao, Ru; Wang, Youbin; Wang, Xiaojun
2016-08-01
Keloids are common in the Asian population. Multiple or huge keloids can appear on the chest wall because of its tendency to develop acne, sebaceous cysts, etc. It is difficult to find an ideal treatment for keloids in this area due to the limited local soft tissue and a higher recurrence rate. This study aims at establishing an individualized protocol that can be easily applied according to the size and number of chest wall keloids. A total of 445 patients received various methods (4 protocols) of treatment in our department from September 2006 to September 2012 according to the size and number of their chest wall keloids. All of the patients received adjuvant radiotherapy in our hospital. The Patient and Observer Scar Assessment Scale (POSAS) was used to assess the treatment effect by both doctors and patients. With a mean follow-up time of 13 months (range: 6-18 months), 362 patients participated in the assessment of POSAS with doctors. Both the doctors and the patients themselves used POSAS to evaluate the treatment effect. The recurrence rate was 0.83%. There was a significant difference (P < 0.001) between the before-surgery and after-surgery scores from both doctors and patients, indicating that both were satisfied with the treatment effect. Our preliminary clinical results indicate that good outcomes can be achieved by choosing the proper method in this algorithm for Chinese patients with chest wall keloids. This algorithm can play a guiding role for surgeons when dealing with chest wall keloid treatment. PMID:27583896
Fast, single-molecule localization that achieves theoretically minimum uncertainty.
Smith, Carlas S; Joseph, Nikolai; Rieger, Bernd; Lidke, Keith A
2010-05-01
We describe an iterative algorithm that converges to the maximum likelihood estimate of the position and intensity of a single fluorophore. Our technique efficiently computes and achieves the Cramér-Rao lower bound, an essential tool for parameter estimation. An implementation of the algorithm on graphics processing unit hardware achieved more than 10^5 combined fits and Cramér-Rao lower bound calculations per second, enabling real-time data analysis for super-resolution imaging and other applications. PMID:20364146
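For intuition, in the idealized 1D shot-noise-limited case the maximum likelihood estimate of a Gaussian PSF's center is simply the photon sample mean, and the Cramér-Rao lower bound on its standard deviation is sigma/sqrt(N). This is a textbook sketch with made-up numbers, not the authors' GPU algorithm, which additionally handles pixelation and camera noise iteratively.

```python
import random
import statistics

random.seed(2)
sigma, N = 1.3, 500   # assumed PSF width (pixels) and detected photon count
true_x = 4.2          # assumed true emitter position

# Photon arrival positions drawn from the 1D Gaussian PSF
photons = [random.gauss(true_x, sigma) for _ in range(N)]

# MLE of the center is the sample mean for a pure Gaussian PSF;
# the CRLB on its standard deviation is sigma / sqrt(N)
x_hat = statistics.fmean(photons)
crlb_std = sigma / N ** 0.5

print(round(x_hat, 2), round(crlb_std, 4))
```

The 1/sqrt(N) scaling is why localization precision improves with photon count, and why an estimator that attains the CRLB is called efficient.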
An SMP soft classification algorithm for remote sensing
NASA Astrophysics Data System (ADS)
Phillips, Rhonda D.; Watson, Layne T.; Easterling, David R.; Wynne, Randolph H.
2014-07-01
This work introduces a symmetric multiprocessing (SMP) version of the continuous iterative guided spectral class rejection (CIGSCR) algorithm, a semiautomated classification algorithm for remote sensing (multispectral) images. The algorithm uses soft data clusters to produce a soft classification containing inherently more information than a comparable hard classification, at an increased computational cost. Previous work suggests that similar algorithms achieve good parallel scalability, motivating the parallel algorithm development work here. Experimental results of applying parallel CIGSCR to an image with approximately 10^8 pixels and six bands demonstrate superlinear speedup. A soft two-class classification is generated in just over 4 min using 32 processors.
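The distinction between a soft and a hard classification can be illustrated with per-cluster membership weights, as below. This is a generic sketch, not the CIGSCR clustering itself; the cluster centers and the beta sharpness parameter are invented.

```python
import math

def soft_memberships(x, centers, beta=2.0):
    # Soft assignment: closer centers get exponentially larger weight,
    # and weights sum to 1, unlike a hard nearest-center assignment
    w = [math.exp(-beta * (x - c) ** 2) for c in centers]
    s = sum(w)
    return [v / s for v in w]

centers = [0.0, 5.0]
for x in (0.2, 2.5, 4.8):
    print([round(v, 3) for v in soft_memberships(x, centers)])
```

A pixel near a cluster center gets a membership close to 1, while an ambiguous pixel midway between centers gets roughly equal memberships; it is this graded information that a hard classification discards.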
Birefringent filter design by use of a modified genetic algorithm.
Wen, Mengtao; Yao, Jianping
2006-06-10
A modified genetic algorithm is proposed for the optimization of fiber birefringent filters. The orientation angles and the element lengths are determined by the genetic algorithm to minimize the sidelobe levels of the filters. Unlike a standard genetic algorithm, the proposed algorithm reduces the problem space of the birefringent filter design to achieve faster speed and better performance. The design of 4-, 8-, and 14-section birefringent filters with an improved sidelobe suppression ratio is realized, and a 4-section birefringent filter designed with the algorithm is experimentally demonstrated. PMID:16761031
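A minimal genetic algorithm of the kind described (selection, one-point crossover, Gaussian mutation) can be sketched as below. The objective is a stand-in for illustration only; it is not the actual sidelobe-level cost of a birefringent filter, and all parameter values are invented.

```python
import random

random.seed(0)

def fitness(genes):
    # Toy objective standing in for sidelobe level: minimize sum of squares
    return sum(g * g for g in genes)

def evolve(pop_size=40, n_genes=4, gens=60, sigma=0.2):
    pop = [[random.uniform(-1, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]             # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_genes)          # Gaussian mutation of one gene
            child[i] += random.gauss(0, sigma)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(round(fitness(best), 3))
```

The paper's modification amounts to shrinking the search space before running such a loop; in this sketch that would correspond to constraining the gene ranges.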
Minimalist ensemble algorithms for genome-wide protein localization prediction
2012-01-01
Background Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, the prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, which usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by the running complexity and redundancy among individual prediction algorithms. Results This paper proposed a novel method for rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature selection based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that the high performance ensemble algorithms are usually composed of the predictors that together cover most of available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted voting based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from inclusion of too many individual predictors. Conclusions We
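The classifier-based combination idea, feeding individual predictors' outputs into a logistic regression, can be sketched in miniature as below. The toy data, the two binary "predictors", and the learning-rate settings are invented; the actual method also performs contribution-score-based feature selection over many real localization predictors.

```python
import math
import random

random.seed(1)

def train_lr(X, y, lr=0.5, epochs=2000):
    # Logistic-regression combiner over individual predictor outputs (SGD)
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                      # gradient of the log loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

# Toy ensemble inputs: predictor 1 is informative, predictor 2 is noise
X = [[1, 0], [1, 1], [0, 0], [0, 1], [1, 0], [0, 1]]
y = [1, 1, 0, 0, 1, 0]
w, b = train_lr(X, y)
preds = [1 if b + sum(wj * xj for wj, xj in zip(w, xi)) > 0 else 0 for xi in X]
print(preds)
```

The learned weights play the role of the paper's contribution scores: an uninformative predictor ends up with a weight near zero and could be dropped from the minimalist ensemble.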
The theory of hybrid stochastic algorithms
Kennedy, A.D. (Supercomputer Computations Research Inst.)
1989-11-21
These lectures introduce the family of Hybrid Stochastic Algorithms for performing Monte Carlo calculations in Quantum Field Theory. After explaining the basic concepts of Monte Carlo integration we discuss the properties of Markov processes and one particularly useful example of them: the Metropolis algorithm. Building upon this framework we consider the Hybrid and Langevin algorithms from the viewpoint that they are approximate versions of the Hybrid Monte Carlo method; and thus we are led to consider Molecular Dynamics using the Leapfrog algorithm. The lectures conclude by reviewing recent progress in these areas, explaining higher-order integration schemes, the asymptotic large-volume behaviour of the various algorithms, and some simple exact results obtained by applying them to free field theory. It is attempted throughout to give simple yet correct proofs of the various results encountered. 38 refs.
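The Leapfrog integrator mentioned above can be illustrated on the simplest possible system, a unit-mass harmonic oscillator (a sketch, not lattice field theory code). Its symplecticity is what keeps the energy error bounded over long trajectories rather than drifting, the property Hybrid Monte Carlo relies on.

```python
def leapfrog(q, p, force, dt, steps):
    # Leapfrog (kick-drift-kick): time-reversible and symplectic
    for _ in range(steps):
        p += 0.5 * dt * force(q)   # half kick
        q += dt * p                # full drift
        p += 0.5 * dt * force(q)   # half kick
    return q, p

# Harmonic oscillator H = (p^2 + q^2)/2, unit mass and frequency
force = lambda q: -q
q1, p1 = leapfrog(1.0, 0.0, force, dt=0.1, steps=1000)

# Energy error stays bounded (O(dt^2)) even after 1000 steps
energy_error = abs(0.5 * (p1 * p1 + q1 * q1) - 0.5)
print(f"energy error after 1000 steps: {energy_error:.2e}")
```

A non-symplectic scheme such as forward Euler would show the energy growing steadily over the same trajectory.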
Entrepreneur achievement. Liaoning province.
Zhao, R
1994-03-01
This paper reports the successful entrepreneurial endeavors of members of a 20-person women's group in Liaoning Province, China. Jing Yuhong, a member of the Family Planning Association at Shileizi Village, Dalian City, provided the basis for their achievements by first building an entertainment/study room in her home to encourage married women to learn family planning. Once stocked with books, magazines, pamphlets, and other materials on family planning and agricultural technology, dozens of married women in the neighborhood flocked voluntarily to the room. Yuhong also set out to give these women a way to earn their own income as a means of helping them gain greater equality with their husbands and exert greater control over their personal reproductive and social lives. She gave a section of her farming land to the women's group, loaned approximately US$5200 to group members to help them generate income from small business initiatives, built a livestock shed in her garden for the group to raise marmots, and erected an awning behind her house under which mushrooms could be grown. The investment yielded $12,000 in the first year, allowing each woman to keep more than $520 in dividends. Members then soon began going to fairs in the capital and other places to learn about the outside world, and have successfully ventured out on their own to generate individual incomes. Ten out of twenty women engaged in these income-generating activities asked for and got the one-child certificate. PMID:12287775
Five-dimensional Janis-Newman algorithm
NASA Astrophysics Data System (ADS)
Erbin, Harold; Heurtier, Lucien
2015-08-01
The Janis-Newman algorithm has been shown to be successful in finding new stationary solutions of four-dimensional gravity. Attempts at a generalization to higher dimensions have already been made for the restricted cases with only one angular momentum. In this paper we propose an extension of this algorithm to five dimensions with two angular momenta, using the prescription of Giampieri, through two specific examples: the Myers-Perry and BMPV black holes. We also discuss possible enlargements of our prescriptions to other dimensions and the maximal number of angular momenta, and show how dimensions higher than six appear to be much more challenging to treat within this framework. Nonetheless, this general algorithm provides a unification of the formulation in d=3,4,5 of the Janis-Newman algorithm, for which several examples are presented, including the BTZ black hole.
Handbook of Research on Improving Student Achievement.
ERIC Educational Resources Information Center
Cawelti, Gordon, Ed.
This handbook is designed to identify classroom practices that research has shown to result in higher student achievement. The fundamental idea behind this book is that in order to succeed, efforts to improve instruction must focus on the existing knowledge base about effective teaching and learning. The chapters are: (1) "Introduction" (Gordon…
Achievement in Boys' Schools 2010-12
ERIC Educational Resources Information Center
Wylie, Cathy; Berg, Melanie
2014-01-01
This report explores the achievement of school leavers from state and state-integrated boys' schools. The analysis from 2010 to 2012 shows school leavers from state boys' schools had higher qualifications than their male counterparts who attended state co-educational schools. The research was carried out for the Association of Boys' Schools of New…
Academic Freedom, Achievement Standards and Professional Identity
ERIC Educational Resources Information Center
Sadler, D. Royce
2011-01-01
The tension between the freedom of academics to grade the achievements of their students without interference or coercion and the prerogative of higher education institutions to control grading standards is often deliberated by weighing up the authority and rights of the two parties. An alternative approach is to start with an analysis of the…
The Achiever. Volume 6, Number 4
ERIC Educational Resources Information Center
Ashby, Nicole, Ed.
2007-01-01
"The Achiever" is a monthly publication designed expressly for parents and community leaders. Each issue contains news and information about school improvement in the United States. Highlights of this issue include: (1) Spellings Convenes National Summit on Higher Education; (2) Noble Street: Chicago Charter High School Creates a Culture of …
Is achievement in Australian chemistry gender based?
NASA Astrophysics Data System (ADS)
Beard, John; Fogliani, Charles; Owens, Chris; Wilson, Audrey
1993-12-01
This paper compares the performances of female and male secondary students in the 1991 and 1992 Australian National Chemistry Quizzes. Male students consistently achieved a higher mean score in all Year groups (7 to 12), even though the numbers of female and male entrants were approximately equal. Implications for class tests and assessment tasks are addressed.
Communication Studies in Australia: Achievements and Prospects.
ERIC Educational Resources Information Center
Irwin, Harry
The introduction of communications studies in Australian higher education and problems and achievements of the past decade are discussed. Attention is directed to: the development of formal college coursework; staff training and retraining schemes to support development; academic and professional associations; journals in the field; and research,…
A Human Achievement: Mathematics without Boundaries.
ERIC Educational Resources Information Center
Terzioglu, Tosun
This paper describes three fundamental principles, dictated by Wilhelm von Humboldt, that were widely adopted as the basic philosophy of higher education in the United States, and proposes to revive the unfulfilled dream of von Humboldt to make it come true. This paper stresses the achievements of humanity not only in technology, health, or the…
Improving Student Achievement through Alternative Assessments.
ERIC Educational Resources Information Center
Durning, Jermaine; Matyasec, Maryann
An attempt was made to improve students' academic grades and students' opinions of themselves as learners through the use of alternative assessments. The format of mastery learning using the direct instruction practice model was combined with performance-based assessment to increase achievement, self-esteem, and higher level thinking skills.…
The Economic Value of Higher Teacher Quality
ERIC Educational Resources Information Center
Hanushek, Eric A.
2011-01-01
Most analyses of teacher quality end without any assessment of the economic value of altered teacher quality. This paper combines information about teacher effectiveness with the economic impact of higher achievement. It begins with an overview of what is known about the relationship between teacher quality and student achievement. This provides…
The Homogeneity of School Achievement.
ERIC Educational Resources Information Center
Cahan, Sorel
Since the measurement of school achievement involves the administration of achievement tests to various grades on various subjects, both grade level and subject matter contribute to within-school achievement variations. To determine whether achievement test scores vary most among different fields within a grade level, or within fields among…
Higher Education Exchange, 2014
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2014-01-01
Research shows that not only does higher education not see the public, but when the public, in turn, looks at higher education, it sees mostly malaise, inefficiencies, expense, and unfulfilled promises. Yet, the contributors to this issue of the "Higher Education Exchange" tell of bright spots in higher education where experiments in working…
Higher Education Exchange, 2008
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2008-01-01
"Higher Education Exchange" publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic societies. Contributors to this issue of the "Higher Education Exchange" examine whether institutions of higher learning are doing anything to increase the capacity of citizens to shape their future.…
Higher Education Exchange, 2010
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2010-01-01
"Higher Education Exchange" publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic societies. Contributors to this issue of the "Higher Education Exchange" examine whether institutions of higher learning are doing anything to increase the capacity of citizens to shape their future.…
Higher Education Exchange, 2011
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2011-01-01
"Higher Education Exchange" publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic societies. Contributors to this issue of the "Higher Education Exchange" examine whether institutions of higher learning are doing anything to increase the capacity of citizens to shape their future.…
Higher Education Exchange, 2012
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2012-01-01
"Higher Education Exchange" publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic societies. Contributors to this issue of the "Higher Education Exchange" examine whether institutions of higher learning are doing anything to increase the capacity of citizens to shape their future.…
Facial Composite System Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Zahradníková, Barbora; Duchovičová, Soňa; Schreiber, Peter
2014-12-01
The article deals with genetic algorithms and their application in face identification. The purpose of the research is to develop a free and open-source facial composite system using evolutionary algorithms, primarily the processes of selection and breeding. Initial testing demonstrated higher quality of the final composites and a massive reduction in composite processing time. System requirements were specified, and a direction for future research was proposed in order to improve the results.
HEPEX - achievements and challenges!
NASA Astrophysics Data System (ADS)
Pappenberger, Florian; Ramos, Maria-Helena; Thielen, Jutta; Wood, Andy; Wang, Qj; Duan, Qingyun; Collischonn, Walter; Verkade, Jan; Voisin, Nathalie; Wetterhall, Fredrik; Vuillaume, Jean-Francois Emmanuel; Lucatero Villasenor, Diana; Cloke, Hannah L.; Schaake, John; van Andel, Schalk-Jan
2014-05-01
HEPEX is an international initiative bringing together hydrologists, meteorologists, researchers and end-users to develop advanced probabilistic hydrological forecast techniques for improved flood, drought and water management. HEPEX was launched in 2004 as an independent, cooperative international scientific activity. During the first meeting, the overarching goal was defined as: "to develop and test procedures to produce reliable hydrological ensemble forecasts, and to demonstrate their utility in decision making related to the water, environmental and emergency management sectors." The applications of hydrological ensemble predictions span large spatio-temporal scales, ranging from short-term and localized predictions to global climate change and regional modeling. Within the HEPEX community, information is shared through its blog (www.hepex.org), meetings, test beds and intercomparison experiments, as well as project reports. Key questions of HEPEX are: * What adaptations are required for meteorological ensemble systems to be coupled with hydrological ensemble systems? * How should the existing hydrological ensemble prediction systems be modified to account for all sources of uncertainty within a forecast? * What is the best way for the user community to take advantage of ensemble forecasts and to make better decisions based on them? This year HEPEX celebrates its 10th anniversary, and this poster will present a review of the main operational and research achievements and challenges prepared by HEPEX contributors on data assimilation, post-processing of hydrologic predictions, forecast verification, communication and use of probabilistic forecasts in decision-making. Additionally, we will present the most recent activities implemented by HEPEX and illustrate how everyone can join the community and participate in the development of new approaches in hydrologic ensemble prediction.
The delay multiply and sum beamforming algorithm in ultrasound B-mode medical imaging.
Matrone, Giulia; Savoia, Alessandro Stuart; Caliano, Giosue; Magenes, Giovanni
2015-04-01
Most ultrasound medical imaging systems currently on the market implement standard Delay and Sum (DAS) beamforming to form B-mode images. However, the image resolution and contrast achievable with DAS are limited by the aperture size and by the operating frequency. For this reason, different beamformers have been presented in the literature, mainly based on adaptive algorithms, which achieve higher performance at the cost of increased computational complexity. In this paper, we propose the use of an alternative nonlinear beamforming algorithm for medical ultrasound imaging, called Delay Multiply and Sum (DMAS), which was originally conceived for a microwave RADAR system for breast cancer detection. We modify the DMAS beamformer and test its performance on both simulated and experimentally collected linear-scan data, by comparing the point spread functions, beampatterns, synthetic phantom and in vivo carotid artery images obtained with standard DAS and with the proposed algorithm. Results show that the DMAS beamformer outperforms DAS in both simulated and experimental trials, and that the main improvement brought about by this new method is a significantly higher contrast resolution (i.e., narrower main lobe and lower side lobes), which translates into an increased dynamic range and better quality of B-mode images. PMID:25420256
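The pairwise multiply-and-sum step described above can be sketched in a few lines. This is a minimal NumPy illustration of the DMAS combination rule (signed square roots of all pairwise products of the pre-delayed channel signals) applied to toy data, not the authors' imaging pipeline:

```python
import numpy as np

def das(channels):
    """Standard Delay And Sum: plain sum across pre-delayed channel signals."""
    return channels.sum(axis=0)

def dmas(channels):
    """Delay Multiply And Sum: sum the signed square roots of all pairwise
    products of the pre-delayed channel signals -- the nonlinear step that
    narrows the main lobe and lowers the side lobes."""
    n = channels.shape[0]
    out = np.zeros(channels.shape[1])
    for i in range(n - 1):
        prod = channels[i] * channels[i + 1:]     # all pairs (i, j > i) at once
        out += (np.sign(prod) * np.sqrt(np.abs(prod))).sum(axis=0)
    return out

# Two perfectly coherent channels: DAS doubles the signal, while the single
# DMAS pair yields |s| -- note the rectification, which is why practical
# DMAS is followed by band-pass filtering (the "filtered DMAS" variant).
t = np.linspace(0.0, 1e-6, 256)
s = np.sin(2 * np.pi * 5e6 * t)
stack = np.stack([s, s])
```

The signed square root restores the original signal dimensionality after the multiplication, so DMAS output remains comparable in scale to DAS.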
Semioptimal practicable algorithmic cooling
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-04-15
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
NASA Astrophysics Data System (ADS)
Cosofret, Bogdan R.; Shokhirev, Kirill; Mulhall, Phil; Payne, David; Harris, Bernard
2014-05-01
Technology development efforts seek to increase the capability of detection systems in the low signal-to-noise regimes encountered in both portal and urban detection applications. We have recently demonstrated significant performance enhancement in existing Advanced Spectroscopic Portals (ASP), Standoff Radiation Detection Systems (SORDS) and handheld isotope identifiers through the use of new advanced detection and identification algorithms. The Poisson Clutter Split (PCS) algorithm is a novel approach to radiological background estimation that improves the detection and discrimination capability of medium resolution detectors. The algorithm processes energy spectra and performs clutter suppression, yielding de-noised gamma-ray spectra that enable significant enhancements in detection and identification of low activity threats with spectral target recognition algorithms. The performance is achievable at the short integration times (0.5-1 second) necessary for operation in a high-throughput and dynamic environment. PCS has been integrated with ASP, SORDS and RIID units and evaluated in field trials. We present a quantitative analysis of algorithm performance against data collected by a range of systems in several cluttered environments (urban and containerized) with embedded check sources. We show that the algorithm achieves a high probability of detection/identification with low false alarm rates under low-SNR regimes. For example, utilizing only 4 of the 12 NaI detectors available within an ASP unit, PCS processing demonstrated Pd,ID > 90% at a CFAR (Constant False Alarm Rate) of 1 in 1000 occupancies against weak (7-8 μCi) and shielded sources traveling through the portal at 30 mph. This vehicle speed is a factor of 6 higher than was previously possible and results in a significant increase in system throughput and overall performance.
[The correlation based mid-infrared temperature and emissivity separation algorithm].
Cheng, Jie; Nie, Ai-Xiu; Du, Yong-Ming
2009-02-01
Temperature and emissivity separation is a key problem in infrared remote sensing. Based on an analysis of the relationship between the atmospheric downward radiance and the surface emissivity containing atmospheric residue, in the absence of solar irradiation, the present paper puts forward a temperature and emissivity separation algorithm for ground-based mid-infrared hyperspectral data. The algorithm uses the correlation between the atmospheric downward radiance and the surface emissivity containing atmospheric residue as a criterion to optimize the surface temperature; this correlation depends on the bias between the estimated and true surface temperatures. The larger the temperature bias, the greater the correlation. Once the surface temperature has been obtained, the surface emissivity can be calculated easily. The accuracy of the algorithm was evaluated with simulated mid-infrared hyperspectral data. The simulation results show that the algorithm achieves high accuracy in temperature and emissivity inversion and has broad applicability. Meanwhile, the algorithm is insensitive to instrumental random noise and to changes in the atmospheric downward radiance during field measurements. PMID:19445199
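The correlation criterion can be illustrated numerically. The sketch below uses made-up spectra and toy Planck-function constants, not the paper's data: when the assumed surface temperature is wrong, structure from the downward radiance leaks into the retrieved emissivity, so minimizing the correlation recovers the temperature:

```python
import numpy as np

# Planck radiance constants for wavelength in um, temperature in K (toy units)
C1, C2 = 1.191e8, 1.4388e4

def planck(lam_um, T):
    return C1 / (lam_um ** 5 * (np.exp(C2 / (lam_um * T)) - 1.0))

rng = np.random.default_rng(0)
lam = np.linspace(3.0, 5.0, 200)                     # mid-IR channels, um
T_true = 300.0
eps_true = 0.96 + 0.005 * rng.standard_normal(200)   # surface emissivity
L_down = 0.3 * planck(lam, 280.0) * (1.0 + 0.5 * rng.random(200))  # spiky sky
L_meas = eps_true * planck(lam, T_true) + (1.0 - eps_true) * L_down

def residual_corr(T):
    """|correlation| between retrieved emissivity and downward radiance:
    minimal when the assumed T matches the true surface temperature."""
    eps_T = (L_meas - L_down) / (planck(lam, T) - L_down)
    return abs(np.corrcoef(eps_T, L_down)[0, 1])

T_grid = np.arange(280.0, 321.0, 1.0)
T_est = T_grid[np.argmin([residual_corr(T) for T in T_grid])]
```

A 20 K temperature bias leaves a clearly larger emissivity-radiance correlation than the true temperature does, which is the monotonicity the optimization relies on.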
PGC demodulating scheme based on CORDIC algorithm for interferometric optical fiber sensor
NASA Astrophysics Data System (ADS)
Jing, Zhenguo; Zhang, Min; Wang, Liwei; Yin, Kai; Liao, Yanbiao
2007-11-01
One important advantage of interferometric optical fiber sensors is their high sensitivity. The development of interferometric optical fiber sensors has been partly restricted by the demodulating technique. Because of advantages such as high sensitivity, high dynamic range, and good linearity, the PGC (Phase Generated Carrier) demodulating scheme is now widely applied to interferometric optical fiber sensors. In this paper, an arctangent approach to the PGC demodulating scheme is introduced. The CORDIC (Coordinate Rotation Digital Computer) algorithm is used to realize the arctangent function. CORDIC is a method for computing elementary functions using minimal hardware such as shifts, adds/subtracts and compares. The algorithm works by rotating the coordinate system through constant angles until the angle is reduced to zero, with the angle offsets selected such that the operations on X and Y are only shifts and adds. This leads to lower complexity and higher accuracy. Since digital signal processing technology has advanced greatly, especially with the appearance of high-speed processors such as FPGAs and DSPs, the PGC demodulating scheme based on the CORDIC algorithm can be implemented conveniently. Experiments were carried out to verify the PGC demodulating scheme based on the CORDIC algorithm.
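A vectoring-mode CORDIC arctangent, the operation the scheme above maps to shift-and-add hardware, can be sketched as follows; floating-point multiplications by 2^-i stand in for the hardware shifts, and the CORDIC gain cancels because only the angle is needed:

```python
import math

# Rotation angle table: atan(2^-i), precomputed once (a small ROM in hardware)
ANGLES = [math.atan(2.0 ** -i) for i in range(40)]

def cordic_atan2(y, x, iters=40):
    """Vectoring-mode CORDIC: drive y toward 0 with pseudo-rotations,
    accumulating the total rotated angle, which converges to atan2(y, x)."""
    angle = 0.0
    if x < 0:                         # pre-rotate into the right half-plane
        if y >= 0:
            x, y, angle = y, -x, math.pi / 2.0
        else:
            x, y, angle = -y, x, -math.pi / 2.0
    for i in range(iters):
        d = 1.0 if y > 0 else -1.0    # rotate toward the x-axis
        x, y = x + d * y * 2.0 ** -i, y - d * x * 2.0 ** -i
        angle += d * ANGLES[i]
    return angle
```

Each iteration needs only two shift-add pairs and one table lookup, which is why the approach suits FPGA/DSP implementations of the arctangent step in PGC demodulation.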
A Hybrid CPU/GPU Pattern-Matching Algorithm for Deep Packet Inspection.
Lee, Chun-Liang; Lin, Yi-Shan; Chen, Yaw-Chung
2015-01-01
The large quantities of data now being transferred via high-speed networks have made deep packet inspection indispensable for security purposes. Scalable and low-cost signature-based network intrusion detection systems have been developed for deep packet inspection for various software platforms. Traditional approaches that only involve central processing units (CPUs) are now considered inadequate in terms of inspection speed. Graphic processing units (GPUs) have superior parallel processing power, but transmission bottlenecks can reduce optimal GPU efficiency. In this paper we describe our proposal for a hybrid CPU/GPU pattern-matching algorithm (HPMA) that divides and distributes the packet-inspecting workload between a CPU and GPU. All packets are initially inspected by the CPU and filtered using a simple pre-filtering algorithm, and packets that might contain malicious content are sent to the GPU for further inspection. Test results indicate that in terms of random payload traffic, the matching speed of our proposed algorithm was 3.4 times and 2.7 times faster than those of the AC-CPU and AC-GPU algorithms, respectively. Further, HPMA achieved higher energy efficiency than the other tested algorithms. PMID:26437335
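The division of labor can be sketched as a two-stage matcher. The prefix-set pre-filter below is an illustrative stand-in of my own construction (not the paper's data structures) for the cheap first pass, with exhaustive matching playing the role HPMA assigns to the GPU:

```python
def build_prefilter(patterns, k=2):
    """Collect the k-byte prefixes of every signature (the cheap stage)."""
    return {p[:k] for p in patterns}

def is_suspicious(payload, prefixes, k=2):
    """Stage 1 (the CPU's role in HPMA): does any k-byte window of the
    payload match a signature prefix?  False positives are fine here;
    they just forward a few extra packets to stage 2."""
    return any(payload[i:i + k] in prefixes for i in range(len(payload) - k + 1))

def inspect(payload, patterns, prefixes):
    """Stage 2 (the GPU's role in HPMA): exact matching, run only on
    payloads that survived the pre-filter."""
    if not is_suspicious(payload, prefixes):
        return []          # fast path: most benign traffic stops here
    return [p for p in patterns if p in payload]

# Hypothetical example signatures, for illustration only
SIGS = [b"evil-shell", b"sqlmap"]
PREF = build_prefilter(SIGS)
```

The point of the split is throughput: the first stage is a trivially cheap scan that rejects the bulk of traffic, so the expensive exact matcher (Aho-Corasick on the GPU in HPMA) only sees the suspicious remainder.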
Doppler-based motion compensation algorithm for focusing the signature of a rotorcraft.
Goldman, Geoffrey H
2013-02-01
A computationally efficient algorithm was developed and tested to compensate for the effects of motion on the acoustic signature of a rotorcraft. For target signatures with large spectral peaks that vary slowly in amplitude and have near-constant frequency, the time-varying Doppler shift can be tracked and then removed from the data. The algorithm can be used to preprocess data for classification, tracking, and nulling algorithms. The algorithm was tested on rotorcraft data. The average instantaneous frequency of the first harmonic of a rotorcraft was tracked with a fixed-lag smoother. Then, state space estimates of the frequency were used to calculate a time warping that removed the effect of the time-varying Doppler shift from the data. The algorithm was evaluated by analyzing the increase in the amplitude of the harmonics in the spectrum of a rotorcraft. The results depended upon the frequency of the harmonics and the duration of the processing interval. Under good conditions, the results for the fundamental frequency of the target (~11 Hz) almost achieved an estimated upper bound. The results for higher-frequency harmonics showed larger increases in peak amplitude, but these remained significantly below the estimated upper bounds. PMID:23363088
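The time-warping idea, resampling the signal on a grid of uniform phase so the tracked harmonic collapses back to a constant frequency, can be sketched with a synthetic 11 Hz tone. The parameters are toy values, and the instantaneous frequency is known analytically here, whereas the paper estimates it with a fixed-lag smoother:

```python
import numpy as np

fs = 1000.0
t = np.arange(0.0, 4.0, 1.0 / fs)
# First harmonic (~11 Hz) with a +/-10% sinusoidal Doppler modulation
f_inst = 11.0 * (1.0 + 0.10 * np.sin(2 * np.pi * 0.5 * t))
phase = 2 * np.pi * np.cumsum(f_inst) / fs
x = np.cos(phase)

# Time warping: sample x at the instants where its phase crosses a uniform
# grid, so the harmonic becomes a constant 11 Hz line after resampling.
f0 = 11.0
psi = 2 * np.pi * f0 * np.arange(len(t)) / fs
psi = psi[psi <= phase[-1]]            # stay inside the observed phase range
t_warp = np.interp(psi, phase, t)      # invert the monotone phase function
y = np.interp(t_warp, t, x)            # the "de-Dopplered" signal

def rel_peak(sig):
    """Windowed spectral peak amplitude, normalized by record length."""
    w = np.hanning(len(sig))
    return np.abs(np.fft.rfft(sig * w)).max() / len(sig)
```

Because the FM smearing spreads the tone's energy into sidebands, the spectral peak of the warped signal is substantially taller than that of the raw signal, which is exactly the evaluation metric described above.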
Kim, Byung S; Yoo, Sun K
2007-09-01
The use of wireless networks is of great practical importance for the instantaneous transmission of ECG signals during movement. In this paper, three typical wavelet-based ECG compression algorithms, Rajoub (RA), Embedded Zerotree Wavelet (EZ), and Wavelet Transform Higher-Order Statistics Coding (WH), were evaluated to find an appropriate ECG compression algorithm for scalable and reliable wireless tele-cardiology applications, particularly over a CDMA network. The short-term and long-term performance characteristics of the three algorithms were analyzed using normal, abnormal, and measurement noise-contaminated ECG signals from the MIT-BIH database. In addition to the processing delay measurement, compression efficiency and reconstruction sensitivity to error were also evaluated via simulation models including the noise-free channel model, random noise channel model, and CDMA channel model, as well as over an actual CDMA network currently operating in Korea. This study found that the EZ algorithm achieves the best compression efficiency in a low-noise environment, and that the WH algorithm is competitive in high-error environments, although its short-term performance degrades with abnormal or noise-contaminated ECG signals. PMID:17701824
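As a generic illustration of how wavelet coders of this family trade coefficients for distortion (a plain Haar thresholding sketch of my own, not the RA, EZ, or WH algorithms), one can zero the smallest transform coefficients and measure the percentage RMS difference (PRD), a common ECG compression distortion metric:

```python
import numpy as np

def haar_fwd(x, levels):
    """Multi-level orthonormal 1-D Haar transform (len(x) divisible by 2**levels)."""
    coeffs, a = [], x.astype(float)
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.append(d)                    # detail coefficients, fine to coarse
    coeffs.append(a)                        # final approximation
    return coeffs

def haar_inv(coeffs):
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * len(a))
        out[0::2], out[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
        a = out
    return a

def compress(x, keep=0.15, levels=5):
    """Zero all but the largest `keep` fraction of coefficients, then invert."""
    coeffs = haar_fwd(x, levels)
    flat = np.concatenate(coeffs)
    thresh = np.sort(np.abs(flat))[int((1.0 - keep) * len(flat))]
    kept = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
    return haar_inv(kept)

def prd(x, y):
    """Percentage RMS difference between original and reconstruction."""
    return 100.0 * np.linalg.norm(x - y) / np.linalg.norm(x)

n = np.arange(1024)
ecg_like = np.sin(2 * np.pi * n / 256) + 0.5 * np.sin(2 * np.pi * 3 * n / 256)
```

Keeping 15% of the coefficients of a smooth signal yields a reconstruction within a few percent PRD; real coders like EZ additionally entropy-code the surviving coefficients.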
Peak detection in fiber Bragg grating using a fast phase correlation algorithm
NASA Astrophysics Data System (ADS)
Lamberti, A.; Vanlanduit, S.; De Pauw, B.; Berghmans, F.
2014-05-01
The fiber Bragg grating sensing principle is based on exact tracking of the peak wavelength location. Several peak detection techniques have been proposed in the literature. Among these, conventional peak detection (CPD) methods, such as the maximum detection algorithm (MDA), do not achieve very high precision and accuracy, especially when the signal-to-noise ratio (SNR) and the wavelength resolution are poor. On the other hand, recently proposed algorithms, like the cross-correlation demodulation algorithm (CCA), are more precise and accurate but require higher computational effort. To overcome these limitations, we developed a novel fast phase correlation (FPC) algorithm that performs as well as the CCA while being considerably faster. This paper presents the FPC technique and analyzes its performance for different SNRs and wavelength resolutions. Using simulations and experiments, we compared the FPC with the MDA and CCA algorithms. The FPC detection capabilities were as precise and accurate as those of the CCA and considerably better than those of the CPD methods. The FPC computational time was up to 50 times lower than that of the CCA, making the FPC a valid candidate for future implementation in real-time systems.
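The phase-correlation principle behind such peak tracking can be sketched as follows. The Gaussian "spectra" and the regularization constant are illustrative choices of mine, not the paper's method details; whitening the cross-power spectrum keeps only the phase, i.e. the pure displacement information:

```python
import numpy as np

def phase_corr_shift(ref, meas):
    """Integer-sample shift between two spectra via phase correlation:
    whiten the cross-power spectrum so only phase remains, then locate
    the resulting correlation peak."""
    R = np.fft.fft(meas) * np.conj(np.fft.fft(ref))
    R /= np.abs(R) + 1e-6 * np.abs(R).max()   # regularized whitening
    corr = np.fft.ifft(R).real
    k = int(np.argmax(corr))
    return k if k <= len(ref) // 2 else k - len(ref)   # wrap to a signed shift

x = np.arange(512, dtype=float)
ref = np.exp(-((x - 200.0) / 8.0) ** 2)    # idealized FBG reflection peak
meas = np.exp(-((x - 205.0) / 8.0) ** 2)   # same peak, 5 samples away
```

Sub-sample (and hence sub-picometer) resolution can then be obtained by interpolating around the correlation maximum, e.g. with a parabolic fit, without rerunning the full cross-correlation search that makes CCA expensive.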
iCut: an Integrative Cut Algorithm Enables Accurate Segmentation of Touching Cells
He, Yong; Gong, Hui; Xiong, Benyi; Xu, Xiaofeng; Li, Anan; Jiang, Tao; Sun, Qingtao; Wang, Simin; Luo, Qingming; Chen, Shangbin
2015-01-01
Individual cells play essential roles in the biological processes of the brain. The number of neurons changes during both normal development and disease progression. High-resolution imaging has made it possible to directly count cells. However, the automatic and precise segmentation of touching cells continues to be a major challenge for massive and highly complex datasets. Thus, an integrative cut (iCut) algorithm, which combines information regarding spatial location and intervening and concave contours with the established normalized cut, has been developed. iCut involves two key steps: (1) a weighting matrix is first constructed with the abovementioned information regarding the touching cells and (2) a normalized cut algorithm that uses the weighting matrix is implemented to separate the touching cells into isolated cells. This novel algorithm was evaluated using two types of data: the open SIMCEP benchmark dataset and our micro-optical imaging dataset from a Nissl-stained mouse brain. It has achieved a promising recall/precision of 91.2 ± 2.1%/94.1 ± 1.8% and 86.8 ± 4.1%/87.5 ± 5.7%, respectively, for the two datasets. As quantified using the harmonic mean of recall and precision, the accuracy of iCut is higher than that of some state-of-the-art algorithms. The better performance of this fully automated algorithm can benefit studies of brain cytoarchitecture. PMID:26168908
Efficient implementation of Jacobi algorithms and Jacobi sets on distributed memory architectures
NASA Astrophysics Data System (ADS)
Eberlein, P. J.; Park, Haesun
1990-04-01
One-sided methods for implementing Jacobi diagonalization algorithms have recently been proposed for both distributed memory and vector machines. These methods are naturally well suited to distributed memory and vector architectures because of their inherent parallelism and their abundance of vector operations. Also, one-sided methods require substantially less message passing than the two-sided methods, and thus can achieve higher efficiency. We describe in detail the use of the one-sided Jacobi rotation as opposed to the rotation used in the "Hestenes" algorithm; we perceive this difference to have been widely misunderstood. Furthermore, the one-sided algorithm generalizes to other problems, such as the nonsymmetric eigenvalue problem, while the Hestenes algorithm does not. We discuss two new implementations of Jacobi sets for a ring-connected array of processors and show their isomorphism to the round-robin ordering. Moreover, we show that the two implementations produce Jacobi sets in identical orders up to a relabeling. These orderings are optimal in the sense that they complete each sweep in a minimum number of stages with minimal communication. We present implementation results of one-sided Jacobi algorithms using these orderings on the NCUBE/seven hypercube as well as the Intel iPSC/2 hypercube. Finally, we mention how other orderings can be implemented. The number of nonisomorphic Jacobi sets has recently been shown to become infinite with increasing n. The work of this author was supported by National Science Foundation Grant CCR-8813493.
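A sequential sketch of a one-sided (column-orthogonalization) Jacobi iteration may help fix ideas. The cyclic pair ordering below is the simplest sequential stand-in for a Jacobi set; the parallel ring orderings discussed above distribute exactly these column pairs across processors:

```python
import numpy as np

def one_sided_jacobi_sv(A, sweeps=12, tol=1e-12):
    """Singular values via one-sided Jacobi: plane-rotate column pairs of a
    working copy until all columns are mutually orthogonal; the column norms
    are then the singular values."""
    U = np.array(A, dtype=float)
    n = U.shape[1]
    for _ in range(sweeps):
        off = 0.0
        for i in range(n - 1):
            for j in range(i + 1, n):      # one sweep over all column pairs
                alpha = U[:, i] @ U[:, i]
                beta = U[:, j] @ U[:, j]
                gamma = U[:, i] @ U[:, j]  # off-diagonal element to annihilate
                off = max(off, abs(gamma))
                if abs(gamma) < tol:
                    continue
                zeta = (beta - alpha) / (2.0 * gamma)
                sgn = 1.0 if zeta >= 0 else -1.0
                t = sgn / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                rot = np.array([[c, s], [-s, c]])
                U[:, [i, j]] = U[:, [i, j]] @ rot   # only touches columns i, j
        if off < tol:
            break
    return np.sort(np.linalg.norm(U, axis=0))[::-1]

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
```

Because each rotation touches only two columns, disjoint pairs can be processed simultaneously, which is precisely what the ring/round-robin orderings exploit: each stage applies n/2 independent rotations.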
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). In addition, we adopt several improvement strategies: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a random linear method; and finally, the tabu search algorithm is improved by appending a mutation operator. Through the combination of these strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with many extrema and many parameters. This is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm and exploits the advantages of each. The method is validated on the current universal standard sequences: Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms single algorithms on the accuracy of the computed protein sequence energy value, which proves it to be an effective way to predict the structure of proteins. PMID:25069136
Algorithm for Autonomous Landing
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki
2011-01-01
Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and it can be used to avoid obstacles as well as to facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
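A constant time-to-collision descent law implies an exponential height profile, which a toy Euler simulation makes concrete. All numbers here are illustrative, not from the NASA work; commanding sink rate v = h / TAU (with TAU the optical-flow-derived time-to-collision) gives h(t) = h0 * exp(-t / TAU), and the horizontal speed can be held proportional to height as described above:

```python
import math

TAU = 2.0     # commanded time-to-collision, s (illustrative)
DT = 0.01     # integration step, s
H0 = 50.0     # initial height, m (illustrative)

h, trace = H0, []
for _ in range(1000):        # simulate 10 s of descent
    v_sink = h / TAU         # in practice estimated from optical-flow divergence
    h -= v_sink * DT         # Euler step toward the ground
    trace.append(h)
```

The height never reaches zero in finite time under the pure law, so a touchdown threshold (or the cushioning/leg system mentioned above) handles the final contact.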
Berry, K.; Dayton, S.
1996-10-28
Citibank was using a data collection system to create a one-time-only mailing history on prospective credit card customers; the system was becoming dated in its time-to-market requirements and was in need of performance improvements. To compound problems with the existing system, assuring the quality of the data matching process was manpower-intensive and needed to be automated. Analysis, design, and prototyping capabilities involving information technology were areas of expertise provided by the DOE-LMES Data Systems Research and Development (DSRD) program. The goal of this project was for DSRD to analyze the current Citibank credit card offering system and to suggest and prototype technology improvements that would result in faster processing with quality as good as the current system. Technologies investigated include: a high-speed network of reduced instruction set computing (RISC) processors for loosely coupled parallel processing; tightly coupled, high-performance parallel processing; higher-order computer languages such as C; fuzzy matching algorithms applied to very large data files; relational database management systems; and advanced programming techniques.
Evaluation of TCP congestion control algorithms.
Long, Robert Michael
2003-12-01
Sandia, Los Alamos, and Lawrence Livermore National Laboratories currently deploy high-speed, wide area network links to permit remote access to their supercomputer systems. The current TCP congestion control algorithm does not take full advantage of high-delay, large-bandwidth environments. This report evaluates alternative TCP congestion control algorithms and compares them with the currently used algorithm. The goal was to determine whether an alternative algorithm could provide higher throughput with minimal impact on existing network traffic. The alternative congestion control algorithms used were Scalable TCP and High-Speed TCP. Network lab experiments were run to record the performance of each algorithm under different network configurations: back-to-back with no delay, back-to-back with a 30 ms delay, and two-to-one with a 30 ms delay. The performance of each algorithm was then compared to the existing TCP congestion control algorithm to determine whether an acceptable alternative had been found. Comparisons were made based on throughput, stability, and fairness.
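The throughput gap on long fat pipes comes from the window update rules themselves, which a tiny simulation illustrates. The per-RTT rules below are the published ones (standard TCP: add one segment per RTT, halve on loss; Scalable TCP: grow 1% per RTT, cut to 7/8 on loss); the 1000-segment window is an illustrative choice:

```python
def rtts_to_recover(w, grow, cut):
    """RTT count needed to climb back to window w after a single loss event."""
    cwnd, n = cut(w), 0
    while cwnd < w:
        cwnd = grow(cwnd)
        n += 1
    return n

# Standard TCP: additive increase, multiplicative decrease (AIMD)
std_rtts = rtts_to_recover(1000.0, lambda c: c + 1.0, lambda w: w / 2.0)

# Scalable TCP: multiplicative increase, gentler decrease -- recovery time
# becomes independent of the window size
stcp_rtts = rtts_to_recover(1000.0, lambda c: c * 1.01, lambda w: w * 0.875)
```

Standard TCP needs w/2 round trips to recover (hence poor utilization when the bandwidth-delay product, and thus w, is large), while Scalable TCP recovers in a constant ~14 RTTs regardless of window size.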
Student academic achievement in college chemistry
NASA Astrophysics Data System (ADS)
Tabibzadeh, Kiana S.
General Chemistry is required for a variety of baccalaureate degrees, including all medical-related fields, engineering, and science majors. Depending on the institution, the prerequisite requirement for college-level General Chemistry varies. The success rate for this course is low. The purpose of this study is to examine the factors influencing student academic achievement and retention in General Chemistry at the college level. In this study, student achievement is defined by those students who earned grades of "C" or better. The dissertation contains in-depth studies on the influence of Intermediate Algebra as a prerequisite, compared to Fundamental Chemistry, on student academic achievement and retention in college General Chemistry. In addition, the study examined the extent and manner in which student self-efficacy influences student academic achievement in college-level General Chemistry. The sample for this part of the study is 144 students enrolled in first-semester college-level General Chemistry. Student surveys determined student self-efficacy levels. The statistical analyses of the study demonstrated that Fundamental Chemistry is a better prerequisite for student academic achievement and retention. The study also found that student self-efficacy has no influence on student academic achievement. The significance of this study will be to provide data for the purpose of establishing a uniform and most suitable prerequisite for college-level General Chemistry. Finally, the variables identified as influencing student academic achievement and enhancing student retention will support educators' mission to maximize students' ability to complete their educational goals at institutions of higher education.
Advanced Imaging Algorithms for Radiation Imaging Systems
Marleau, Peter
2015-10-01
The intent of the proposed work, in collaboration with the University of Michigan, is to develop the algorithms that will take the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step toward achieving this is to develop an in-depth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to obtain the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will require the integrated response of a hypothesized configuration of material to be calculated many times. As such, both the MLEM and the model-based algorithm require significant increases in calculation speed in order to converge to solutions in practical amounts of time.
Swarm-based algorithm for phase unwrapping.
da Silva Maciel, Lucas; Albertazzi, Armando G
2014-08-20
A novel algorithm for phase unwrapping based on swarm intelligence is proposed. The algorithm was designed around three main goals: maximum coverage of reliable information, focused effort for better efficiency, and reliable unwrapping. Experiments were performed, and a new agent was designed to follow a simple set of five rules in order to collectively achieve these goals. These rules consist of random walking for unwrapping and searching, ambiguity evaluation by comparing unwrapped regions, and a replication behavior responsible for the good distribution of agents throughout the image. The results were comparable with those from established methods. The swarm-based algorithm was able to suppress ambiguities better than the flood-fill algorithm without relying on lengthy processing times. In addition, future developments such as parallel processing and better quality evaluation offer great potential for the proposed method. PMID:25321125
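The elementary unwrapping step each agent performs while random-walking, integrating wrapped phase differences along its path, is the classic Itoh method, sketched here in one dimension (the swarm machinery itself is not reproduced):

```python
import numpy as np

def unwrap_1d(wrapped):
    """Itoh's method: rewrap the sample-to-sample differences into (-pi, pi]
    and integrate them along the path.  Valid as long as the true phase
    changes by less than pi between neighboring samples."""
    d = np.diff(wrapped)
    d = (d + np.pi) % (2 * np.pi) - np.pi   # wrap differences into (-pi, pi]
    return wrapped[0] + np.concatenate(([0.0], np.cumsum(d)))
```

In two dimensions this path dependence is exactly what creates the ambiguities the abstract mentions: integrating along different routes around a residue yields different results, which is why the agents compare unwrapped regions before merging them.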
SOM-based algorithms for qualitative variables.
Cottrell, Marie; Ibbou, Smaïl; Letrémy, Patrick
2004-01-01
It is well known that the SOM algorithm achieves a clustering of data which can be interpreted as an extension of Principal Component Analysis, because of its topology-preserving property. But the SOM algorithm can only process real-valued data. In previous papers, we have proposed several methods based on the SOM algorithm to analyze categorical data, which is the case in survey data. In this paper, we present these methods in a unified manner. The first one (Kohonen Multiple Correspondence Analysis, KMCA) deals only with the modalities, while the two others (Kohonen Multiple Correspondence Analysis with individuals, KMCA_ind, Kohonen algorithm on DISJonctive table, KDISJ) can take into account the individuals, and the modalities simultaneously. PMID:15555858
Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.
Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding
2016-01-01
The harmony searching (HS) algorithm is a kind of optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables in the harmony database and the probabilities of their candidate values, completing iterative convergence to achieve the optimal effect. Accordingly, this study proposed a modified algorithm to improve the efficiency of the HS algorithm. First, a rough set algorithm was employed to improve its convergence and accuracy. Then, the optimal value was obtained using the improved HS algorithm. This optimal value of convergence was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428
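The basic HS loop (without the paper's rough-set and fuzzy-clustering extensions) can be sketched as follows. Parameter names and values are conventional textbook choices, not the authors': each new harmony draws per-dimension from memory with rate `hmcr`, pitch-adjusts with rate `par`, and replaces the worst stored harmony when it improves on it:

```python
import numpy as np

rng = np.random.default_rng(42)

def harmony_search(f, lo, hi, dim=2, hms=20, hmcr=0.9, par=0.3, bw=0.1,
                   iters=3000):
    """Plain harmony search minimizing f over the box [lo, hi]^dim."""
    hm = rng.uniform(lo, hi, (hms, dim))            # harmony memory
    fit = np.array([f(h) for h in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:
                new[d] = hm[rng.integers(hms), d]   # memory consideration
                if rng.random() < par:
                    new[d] += bw * rng.uniform(-1, 1)  # pitch adjustment
            else:
                new[d] = rng.uniform(lo, hi)        # random consideration
        new = np.clip(new, lo, hi)
        fn = f(new)
        worst = int(np.argmax(fit))
        if fn < fit[worst]:                         # replace worst harmony
            hm[worst], fit[worst] = new, fn
    best = int(np.argmin(fit))
    return hm[best], float(fit[best])
```

In the segmentation pipeline described above, `f` would be the clustering objective, and the best harmony found seeds the fuzzy clustering of the MRI image.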
Advancements to the planogram frequency–distance rebinning algorithm
Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E
2010-01-01
In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact
Reasoning about systolic algorithms
Purushothaman, S.; Subrahmanyam, P.A.
1988-12-01
The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.
Novel and efficient tag SNPs selection algorithms.
Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2014-01-01
SNPs are the most abundant form of genetic variation amongst species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, that represent the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection was presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases, it is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than the previously known methods. Tools and web services for haplotype block analysis, integrated via the Hadoop MapReduce framework, are also developed using the proposed algorithm as their computation kernel. PMID:24212035
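The redundancy the abstract exploits can be illustrated with a toy version of tag SNP selection (this is an assumption-laden sketch, not the paper's algorithm): on binary haplotype data, SNP columns that are identical or perfectly complementary carry the same information, so one representative per equivalence class suffices as a tag.

```python
def select_tag_snps(columns):
    """Toy tag-SNP selection: keep one representative index per class
    of identical-or-complementary binary SNP columns."""
    seen = set()
    tags = []
    for idx, col in enumerate(columns):
        # Canonical key: a column and its complement are equivalent.
        key = min(tuple(col), tuple(1 - x for x in col))
        if key not in seen:
            seen.add(key)
            tags.append(idx)
    return tags

# Column 1 is the complement of column 0, so only columns 0 and 2 tag.
tags = select_tag_snps([[0, 1, 0], [1, 0, 1], [0, 0, 1]])
```

Real tag SNP selection works with linkage-disequilibrium thresholds rather than exact equality, which is where the combinatorial cost arises.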
Higher Education Planning. A Bibliographic Handbook.
ERIC Educational Resources Information Center
Halstead, D. Kent, Ed.
The first edition of a bibliography focusing on state and national level planning in higher education is presented. For purposes of this publication, planning is defined as a process of study and foresight that generates action to achieve desired outcomes in the higher education sector. The bibliography is organized into topic areas that include:…
The Centrality of Engagement in Higher Education
ERIC Educational Resources Information Center
Fitzgerald, Hiram E.; Bruns, Karen; Sonka, Steven T.; Furco, Andrew; Swanson, Louis
2012-01-01
The centrality of engagement is critical to the success of higher education in the future. Engagement is essential to most effectively achieving the overall purpose of the university, which is focused on the knowledge enterprise. Today's engagement is scholarly, is an aspect of learning and discovery, and enhances society and higher education.…
The Centrality of Engagement in Higher Education
ERIC Educational Resources Information Center
Fitzgerald, Hiram E.; Bruns, Karen; Sonka, Steven T.; Furco, Andrew; Swanson, Louis
2016-01-01
The centrality of engagement is critical to the success of higher education in the future. Engagement is essential to most effectively achieving the overall purpose of the university, which is focused on the knowledge enterprise. Today's engagement is scholarly, is an aspect of learning and discovery, and enhances society and higher education.…
Higher Education and the State in Cuba.
ERIC Educational Resources Information Center
Paulston, Rolland G.
How and why the expansion and reorientation in Cuban higher education has taken place is noted, and continuing problems and emerging trends are assessed. Few developing countries can match Cuban achievements in higher education, which has advanced to levels characteristic of developed societies. Ideological orientations of historical trends are…
The Impact of Reading Achievement on Overall Academic Achievement
ERIC Educational Resources Information Center
Churchwell, Dawn Earheart
2009-01-01
This study examined the relationship between reading achievement and achievement in other subject areas. The purpose of this study was to determine if there was a correlation between reading scores as measured by the Standardized Test for the Assessment of Reading (STAR) and academic achievement in language arts, math, science, and social studies…
Attitude Towards Physics and Additional Mathematics Achievement Towards Physics Achievement
ERIC Educational Resources Information Center
Veloo, Arsaythamby; Nor, Rahimah; Khalid, Rozalina
2015-01-01
The purpose of this research is to identify the difference in students' attitude towards Physics and Additional Mathematics achievement based on gender and relationship between attitudinal variables towards Physics and Additional Mathematics achievement with achievement in Physics. This research focused on six variables, which is attitude towards…
Predicting Mathematics Achievement: The Influence of Prior Achievement and Attitudes
ERIC Educational Resources Information Center
Hemmings, Brian; Grootenboer, Peter; Kay, Russell
2011-01-01
Achievement in mathematics is inextricably linked to future career opportunities, and therefore, understanding those factors that influence achievement is important. This study sought to examine the relationships among attitude towards mathematics, ability and mathematical achievement. This examination was also supported by a focus on gender…
A Palmprint Recognition Algorithm Using Phase-Only Correlation
NASA Astrophysics Data System (ADS)
Ito, Koichi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo
This paper presents a palmprint recognition algorithm using Phase-Only Correlation (POC). The use of phase components in 2D (two-dimensional) discrete Fourier transforms of palmprint images makes it possible to achieve highly robust image registration and matching. In the proposed algorithm, POC is used to align scaling, rotation and translation between two palmprint images, and evaluate similarity between them. Experimental evaluation using a palmprint image database clearly demonstrates efficient matching performance of the proposed algorithm.
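The POC function at the heart of this approach is standard and compact: normalize the cross-power spectrum of the two images to unit magnitude (keeping only phase) and take the inverse transform. A sharp peak indicates a match, and the peak location gives the translation; the full paper additionally handles scaling and rotation, which this sketch omits.

```python
import numpy as np

def poc(f, g):
    """Phase-Only Correlation surface of two equally sized 2D images."""
    cross = np.fft.fft2(f) * np.conj(np.fft.fft2(g))
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase only
    return np.real(np.fft.ifft2(cross))

# Toy check: a cyclically shifted copy of an image peaks at the shift.
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
surface = poc(np.roll(img, (5, 3), axis=(0, 1)), img)
peak = np.unravel_index(np.argmax(surface), surface.shape)
```

For identical images up to translation the peak height approaches 1, and it degrades gracefully under noise, which is what makes POC robust for matching.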
Research on secure routing algorithm in wireless sensor network
NASA Astrophysics Data System (ADS)
Zhang, Bo
2013-03-01
Through research on existing wireless sensor networks (WSNs) and their security technologies, this paper presents a design for a WSN-based secure routing algorithm. The design takes an existing routing algorithm as its starting point, adds a security guidance strategy, and introduces location key information to enhance the security of WSN routing. The improved routing algorithm achieves better attack resistance with only a small increase in overhead, and therefore has high practical value.
Competing Sudakov veto algorithms
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Verheyen, Rob
2016-07-01
We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
Building Higher-Order Markov Chain Models with EXCEL
ERIC Educational Resources Information Center
Ching, Wai-Ki; Fung, Eric S.; Ng, Michael K.
2004-01-01
Categorical data sequences occur in many applications such as forecasting, data mining and bioinformatics. In this note, we present higher-order Markov chain models for modelling categorical data sequences with an efficient algorithm for solving the model parameters. The algorithm can be implemented easily in a Microsoft EXCEL worksheet. We give a…
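The counting step behind fitting such models can be sketched as a maximum-likelihood estimate of a full order-k chain (the note's models are more parsimonious than a full chain, and it uses EXCEL rather than Python; this is only the underlying idea):

```python
from collections import Counter, defaultdict

def fit_markov(seq, order=2):
    """Estimate an order-k Markov chain over a categorical sequence:
    count (context, next-symbol) pairs, then normalise each context's
    counts into conditional probabilities."""
    counts = defaultdict(Counter)
    for i in range(order, len(seq)):
        counts[tuple(seq[i - order:i])][seq[i]] += 1
    return {ctx: {s: n / sum(c.values()) for s, n in c.items()}
            for ctx, c in counts.items()}

model = fit_markov(list("AABABAABAB"), order=2)
```

Each `model[context]` row is a probability distribution over the next symbol, which is exactly what a worksheet implementation tabulates.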
Network Algorithms for Detection of Radiation Sources
Rao, Nageswara S; Brooks, Richard R; Wu, Qishi
2014-01-01
estimate, typically specified as a multiplier of the background radiation level. A judicious selection of this source multiplier is essential to achieve optimal detection probability at a specified false alarm rate. Typically, this threshold is chosen from the Receiver Operating Characteristic (ROC) by varying the source multiplier estimate. The ROC is expected to have a monotonically increasing profile between the detection probability and the false alarm rate. We derived ROCs for multiple indoor tests using KMB datasets, which revealed an unexpected loop shape: as the multiplier increases, the detection probability and false alarm rate both increase up to a limit, and then both contract. Consequently, two detection probabilities correspond to the same false alarm rate, and the higher one is achieved at a lower multiplier, which is the desired operating point. Using Chebyshev's inequality we analytically confirm this shape. We then present two improved network-SPRT methods by (a) using the threshold offset as a weighting factor for the binary decisions from individual detectors in a weighted majority voting fusion rule, and (b) applying a composite SPRT derived using measurements from all counters.
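The fusion rule in method (a) can be sketched as a generic weighted majority vote; the specific weights (threshold offsets) come from the paper, while the half-of-total-weight decision rule below is an assumption for illustration.

```python
def fuse(decisions, weights):
    """Weighted majority voting: fuse binary detector decisions, each
    detector weighted (e.g., by its threshold offset)."""
    yes = sum(w for d, w in zip(decisions, weights) if d)
    return yes > sum(weights) / 2
```

A heavily weighted detector can thus outvote several lightly weighted ones, which is the point of weighting by the threshold offset.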
Special Higher Education Bibliography.
ERIC Educational Resources Information Center
Weinberg, Meyer
1982-01-01
Cites works relevant to the higher education of Blacks and minority group members. Lists references alphabetically under the following headings: (1) financial aid on the campus; (2) Chicanos in higher education; and (3) race and equality on California campuses. (GC)
Designing Stochastic Optimization Algorithms for Real-world Applications
NASA Astrophysics Data System (ADS)
Someya, Hiroshi; Handa, Hisashi; Koakutsu, Seiichi
This article presents a review of recent advances in stochastic optimization algorithms. Novel algorithms achieving highly adaptive and efficient searches, theoretical analyses to deepen our understanding of search behavior, successful implementation on parallel computers, attempts to build benchmark suites for industrial use, and techniques applied to real-world problems are included. A list of resources is provided.
Comparison of Beam-Based Alignment Algorithms for the ILC
Smith, J.C.; Gibbons, L.; Patterson, J.R.; Rubin, D.L.; Sagan, D.; Tenenbaum, P.; /SLAC
2006-03-15
The main linac of the International Linear Collider (ILC) requires more sophisticated alignment techniques than those provided by survey alone. Various Beam-Based Alignment (BBA) algorithms have been proposed to achieve the desired low emittance preservation. Dispersion Free Steering, Ballistic Alignment and the Kubo method are compared. Alignment algorithms are also tested in the presence of an Earth-like stray field.
Improved bat algorithm applied to multilevel image thresholding.
Alihodzic, Adis; Tuba, Milan
2014-01-01
Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm with modifications that add elements from differential evolution and from the artificial bee colony algorithm. The proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
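The core bat-algorithm update driving such searches can be sketched as follows. This is a bare-bones continuous minimizer using only Yang's frequency/velocity/position updates; the loudness and pulse-rate schedules, the thresholding objective, and this paper's DE/ABC modifications are all omitted.

```python
import random

def bat_minimize(obj, dim, lo, hi, n_bats=20, iters=200, seed=1):
    """Minimal bat-algorithm sketch: bats fly relative to the current
    best position with a random pulse frequency scaling each step."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    vs = [[0.0] * dim for _ in range(n_bats)]
    best = min(xs, key=obj)[:]
    for _ in range(iters):
        for i in range(n_bats):
            f = rng.random()                       # pulse frequency
            for d in range(dim):
                vs[i][d] += (xs[i][d] - best[d]) * f
                xs[i][d] = min(max(xs[i][d] + vs[i][d], lo), hi)
            if obj(xs[i]) < obj(best):
                best = xs[i][:]
    return best

best = bat_minimize(lambda x: sum(t * t for t in x), dim=2, lo=-5.0, hi=5.0)
```

For multilevel thresholding, `obj` would score a candidate threshold vector, e.g. by Otsu's between-class variance or Kapur's entropy over the image histogram.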
Spotlight on Higher Education.
ERIC Educational Resources Information Center
Klinger, Donna; Iwanowski, Jay
1997-01-01
A number of current issues and initiatives in higher education are highlighted, including impending reauthorization of the Higher Education Act, the need for advocacy of higher education in public policy arenas, a University of Florida program combining accountability and institutional autonomy, and institutional compliance with nonresident alien…
Higher Education Exchange, 2007
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2007-01-01
"Higher Education Exchange" publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic societies. Contributors to this issue of the "Higher Education Exchange" discuss the concept of growing public scholars; each contribution incorporates a student component. Articles include: (1) "Foreword"…
The Higher Education Enterprise.
ERIC Educational Resources Information Center
Ottinger, Cecilia A.
1991-01-01
Higher education not only contributes to the development of the human resources and intellectual betterment of the nation but is also a major economic enterprise. This research brief reviews and highlights data on the size and growth of higher education and illustrates how higher education institutions are preparing the future labor force. It…
Du, Guanyao; Yu, Jianjun
2016-01-01
This paper investigates the system achievable rate for the multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) system with an energy harvesting (EH) relay. Firstly, we propose two protocols, time switching-based decode-and-forward relaying (TSDFR) and a flexible power splitting-based DF relaying (PSDFR) protocol, by considering two practical receiver architectures, to enable simultaneous information processing and energy harvesting at the relay. In the PSDFR protocol, we introduce a temporal parameter to describe the time division pattern between the two phases, which makes the protocol more flexible and general. In order to explore the system performance limit, we discuss the system achievable rate theoretically and formulate two optimization problems for the proposed protocols to maximize the system achievable rate. Since the problems are non-convex and difficult to solve, we first analyze them theoretically and obtain some explicit results, then design an augmented Lagrangian penalty function (ALPF) based algorithm for them. Numerical results are provided to validate the accuracy of our analytical results and the effectiveness of the proposed ALPF algorithm. It is shown that PSDFR outperforms TSDFR, achieving a higher rate in such a MIMO-OFDM relaying system. Besides, we also investigate the impacts of the relay location, the number of antennas and the number of subcarriers on the system performance. Specifically, it is shown that the relay position greatly affects the performance of both protocols, and a relatively worse achievable rate results when the relay is placed midway between the source and the destination. This is different from the MIMO-OFDM DF relaying system without EH. Moreover, the optimal factor indicating the time division pattern between the two phases in the PSDFR protocol is always above 0.8, which means that the common division of the total transmission time into two equal phases in
NASA Astrophysics Data System (ADS)
Li, Lin; Kuai, Xi
2014-11-01
Generating a triangulated irregular network (TIN) from contour maps is the most commonly used approach to build Digital Elevation Models (DEMs) for geo-databases. A well-known problem when building a TIN is that many pan slope triangles (or PSTs) may emerge from the vertices of contour lines. Those triangles should be eliminated from the TIN by adding additional terrain points when refining the local TIN. There are many methods and algorithms available for eliminating PSTs in a TIN, but their performance may not satisfy the requirements of applications where efficiency rather than completeness is critical. This paper investigates commonly used processes for eliminating PSTs and puts forward a new algorithm, referred to as the 'dichotomizing' interpolation algorithm, to achieve a higher efficiency than the conventional 'skeleton' extraction algorithm. Its better performance comes from reducing the number of additional interpolated points to only those that are sufficient and necessary for eliminating PSTs. This goal is reached by dichotomizing PST polygons iteratively and locating additional points at the geometric centers of the polygons. This study verifies, both theoretically and experimentally, the higher efficiency of this new dichotomizing algorithm and also demonstrates its reliability for building DEMs in terms of accuracy for estimating terrain surface elevation.
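Two small pieces of the pipeline are easy to sketch: detecting a pan slope (flat) triangle, whose three vertices share one elevation because they lie on a single contour, and the geometric center where an additional point is placed. The tolerance and the triangle-only centroid are illustrative simplifications of the paper's polygon dichotomizing.

```python
def is_flat(z1, z2, z3, eps=1e-9):
    """A pan slope (flat) triangle has all three vertices at the
    same elevation."""
    return abs(z1 - z2) < eps and abs(z2 - z3) < eps

def centroid(p1, p2, p3):
    """Geometric center of a triangle -- a candidate location for the
    additional interpolated terrain point."""
    pts = (p1, p2, p3)
    return (sum(p[0] for p in pts) / 3.0, sum(p[1] for p in pts) / 3.0)
```

The interpolated point would then receive an elevation estimated between the adjacent contour levels, removing the flat facet from the TIN.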
A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor
Zhang, Liang; Shen, Peiyi; Zhu, Guangming; Wei, Wei; Song, Houbing
2015-01-01
Internet of Things (IoT) is driving innovation in an ever-growing set of application domains such as intelligent processing for autonomous robots. For an autonomous robot, one grand challenge is how to sense its surrounding environment effectively. The Simultaneous Localization and Mapping with RGB-D Kinect camera sensor on robot, called RGB-D SLAM, has been developed for this purpose but some technical challenges must be addressed. Firstly, the efficiency of the algorithm cannot satisfy real-time requirements; secondly, the accuracy of the algorithm is unacceptable. In order to address these challenges, this paper proposes a set of novel improvement methods as follows. Firstly, the ORiented Brief (ORB) method is used in feature detection and descriptor extraction. Secondly, a bidirectional Fast Library for Approximate Nearest Neighbors (FLANN) k-Nearest Neighbor (KNN) algorithm is applied to feature match. Then, the improved RANdom SAmple Consensus (RANSAC) estimation method is adopted in the motion transformation. In the meantime, high precision General Iterative Closest Points (GICP) is utilized to register a point cloud in the motion transformation optimization. To improve the accuracy of SLAM, the reduced dynamic covariance scaling (DCS) algorithm is formulated as a global optimization problem under the G2O framework. The effectiveness of the improved algorithm has been verified by testing on standard data and comparing with the ground truth obtained on Freiburg University’s datasets. The Dr Robot X80 equipped with a Kinect camera is also applied in a building corridor to verify the correctness of the improved RGB-D SLAM algorithm. With the above experiments, it can be seen that the proposed algorithm achieves higher processing speed and better accuracy. PMID:26287198
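The bidirectional matching idea in the abstract (cross-checking feature matches in both directions) can be sketched with a brute-force nearest-neighbor search; the actual system uses FLANN over ORB descriptors for speed, which this toy version does not attempt.

```python
def mutual_matches(desc_a, desc_b):
    """Cross-check matching: keep a pair only when each descriptor is
    the other's nearest neighbour (brute-force toy version)."""
    def nearest(src, dst):
        def dist2(v, w):
            return sum((a - b) ** 2 for a, b in zip(v, w))
        return [min(range(len(dst)), key=lambda j: dist2(v, dst[j]))
                for v in src]
    a2b = nearest(desc_a, desc_b)
    b2a = nearest(desc_b, desc_a)
    return [(i, j) for i, j in enumerate(a2b) if b2a[j] == i]

pairs = mutual_matches([(0, 0), (10, 10)], [(9, 9), (1, 1)])
```

One-directional matches that fail the reverse check are discarded, which cuts down the outliers that RANSAC must later reject.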
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
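The core move of the synthesis subalgorithms described above can be sketched as a search over shift/mask combinations until every key maps to a unique value. This simplified sketch uses a fixed-width mask and omits the rotating masks and offsets the abstract mentions.

```python
def find_shift_mask(keys, max_shift=16, mask_bits=8):
    """Search for a shift such that (k >> shift) & mask is unique for
    every key, making a collision-free constant-time membership map
    possible."""
    mask = (1 << mask_bits) - 1
    for shift in range(max_shift):
        mapped = {(k >> shift) & mask for k in keys}
        if len(mapped) == len(keys):
            return shift, mask
    return None    # this family of solutions does not cover the keys
```

When a solution exists, membership testing reduces to one shift, one mask, and one table lookup, with no secondary hashing or collision search.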
An improved SIFT algorithm based on KFDA in image registration
NASA Astrophysics Data System (ADS)
Chen, Peng; Yang, Lijuan; Huo, Jinfeng
2016-03-01
As a kind of stable feature matching algorithm, SIFT has been widely used in many fields. In order to further improve the robustness of the SIFT algorithm, an improved SIFT algorithm with Kernel Fisher Discriminant Analysis (KFDA-SIFT) is presented for image registration. The algorithm applies KFDA to SIFT descriptors to obtain a feature extraction matrix, uses the new descriptors to conduct feature matching, and finally uses RANSAC to purify the matches further. The experiments show that the presented algorithm is robust to image changes in scale, illumination, perspective, expression and tiny pose, with higher matching accuracy.
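The RANSAC purification step works the same way regardless of the model being fit: repeatedly fit a minimal sample, keep the hypothesis with the most inliers. The sketch below fits a 2D line rather than a match transformation, purely to keep the example short; the paper fits geometric transformations between matched features.

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Minimal RANSAC: fit y = a*x + b to random point pairs and keep
    the model with the most inliers."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                      # degenerate (vertical) sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(abs(y - (a * x + b)) < tol for x, y in points)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# Ten points on y = 2x + 1 plus one gross outlier.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3.0, 40.0)]
(a, b), n_in = ransac_line(pts)
```

The outlier never attracts a majority of inliers, so the recovered line ignores it, which is exactly why RANSAC suits match purification.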
[Research Reports on Academic Achievement].
ERIC Educational Resources Information Center
Latts, Sander; And Others
1969-01-01
Four counselors studied the relation between achievement and choice of major, achievement and motivation, counseling and motivation, and achievement and employment. To see if those with definite majors or career choices in mind did better than those without, 300 students were tested according to the certainty of their choice. No significant…
Cherokee Culture and School Achievement.
ERIC Educational Resources Information Center
Brown, Anthony D.
1980-01-01
Compares the effect of cooperative and competitive behaviors of Cherokee and Anglo American elementary school students on academic achievement. Suggests changes in teaching techniques and lesson organization that might raise academic achievement while taking into consideration tribal traditions that limit scholastic achievement in an…
Parallel scheduling algorithms
Dekel, E.; Sahni, S.
1983-01-01
Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.
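One of the problems listed, scheduling to minimize the number of tardy jobs on one machine, has a classic sequential solution (the Moore-Hodgson rule); sketching it fixes what the paper's parallel algorithms accelerate. The parallel shared-memory versions themselves are not reproduced here.

```python
import heapq

def max_on_time(jobs):
    """Moore-Hodgson: maximum number of jobs finishing by their
    deadlines on a single machine. jobs = [(processing_time, deadline)].
    Process jobs in deadline order; whenever the schedule runs late,
    drop the longest job accepted so far."""
    heap, t = [], 0
    for p, d in sorted(jobs, key=lambda j: j[1]):
        heapq.heappush(heap, -p)          # max-heap via negation
        t += p
        if t > d:
            t += heapq.heappop(heap)      # pops -(longest p): t shrinks
    return len(heap)
```

Minimizing tardy jobs is then `len(jobs) - max_on_time(jobs)`.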
Developmental Algorithms Have Meaning!
ERIC Educational Resources Information Center
Green, John
1997-01-01
Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…
Algorithms for skiascopy measurement automatization
NASA Astrophysics Data System (ADS)
Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta
2014-10-01
An automatic dynamic infrared retinoscope was developed, which allows the procedure to run at a much higher rate. Our system uses a USB image sensor with up to 180 Hz refresh rate, equipped with a long-focus objective and an 850 nm infrared light-emitting diode as the light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic analysis of the accommodative state is developed based on the intensity changes of the fundus reflex.
Combined string searching algorithm based on knuth-morris- pratt and boyer-moore algorithms
NASA Astrophysics Data System (ADS)
Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.
2016-04-01
The string searching task can be classified as a classic information processing task. Users either encounter the solution of this task while working with text processors or browsers, employing standard built-in tools, or this task is solved unseen by the users while they are working with various computer programmes. Nowadays there are many algorithms for solving the string searching problem. The main criterion of these algorithms' effectiveness is searching speed: the larger the shift of the pattern relative to the string in case of a mismatch between pattern and string characters, the higher the algorithm's running speed. This article offers a combined algorithm developed on the basis of the well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms are based on two different basic principles of pattern matching: Knuth-Morris-Pratt is based upon forward pattern matching, and Boyer-Moore upon backward pattern matching. By uniting these two algorithms, the combined algorithm acquires the larger of the two shifts in case of a character mismatch. The article provides an example illustrating the results of the Boyer-Moore and Knuth-Morris-Pratt algorithms and of the combined algorithm, and shows the advantage of the latter in solving the string searching problem.
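One of the two ingredients, the Boyer-Moore bad-character shift with backward matching, is compact enough to sketch in its Horspool simplification. This shows the shift-based searching the article builds on; the combined KMP/BM shift selection itself is not reproduced here.

```python
def horspool_search(text, pattern):
    """Boyer-Moore-Horspool: compare the pattern right-to-left and, on
    a mismatch, shift by the bad-character rule. Returns the index of
    the first occurrence, or -1."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # Shift = distance from a character's last occurrence (excluding
    # the final position) to the end of the pattern.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        j = m - 1
        while j >= 0 and text[i + j] == pattern[j]:
            j -= 1
        if j < 0:
            return i
        i += shift.get(text[i + m - 1], m)   # unseen char: full shift
    return -1
```

When the text character under the pattern's last position never occurs in the pattern, the whole pattern length is skipped at once, which is the source of the speedup the article measures.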
Students' Achievement Goals, Learning-Related Emotions and Academic Achievement.
Lüftenegger, Marko; Klug, Julia; Harrer, Katharina; Langer, Marie; Spiel, Christiane; Schober, Barbara
2016-01-01
In the present research, the recently proposed 3 × 2 model of achievement goals is tested and associations with achievement emotions and their joint influence on academic achievement are investigated. The study was conducted with 388 students using the 3 × 2 Achievement Goal Questionnaire including the six proposed goal constructs (task-approach, task-avoidance, self-approach, self-avoidance, other-approach, other-avoidance) and the enjoyment and boredom scales from the Achievement Emotion Questionnaire. Exam grades were used as an indicator of academic achievement. Findings from CFAs provided strong support for the proposed structure of the 3 × 2 achievement goal model. Self-based goals, other-based goals and task-approach goals predicted enjoyment. Task-approach goals negatively predicted boredom. Task-approach and other-approach predicted achievement. The indirect effects of achievement goals through emotion variables on achievement were assessed using bias-corrected bootstrapping. No mediation effects were found. Implications for educational practice are discussed. PMID:27199836
Efficient Record Linkage Algorithms Using Complete Linkage Clustering
Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar
2016-01-01
Datasets from different agencies often contain records for the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. A large number of the available algorithms for record linkage are prone to either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient and reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a subroutine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. The time complexities of these algorithms do not exceed those of the previous best-known algorithms. Our proposed algorithms outperform the previous best-known algorithms in terms of accuracy while consuming reasonable run times. PMID:27124604
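Complete-linkage clustering for record linkage can be sketched as follows. The edit-distance metric and the threshold are illustrative choices, and this naive O(n^3) agglomeration omits the paper's blocking, duplicate elimination, and parallelization.

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def complete_linkage(records, threshold):
    """Agglomerative clustering under the complete-linkage criterion:
    cluster distance is the MAXIMUM pairwise record distance, so every
    pair inside a final cluster is within the threshold. Records in
    one cluster are declared linked."""
    clusters = [[r] for r in records]
    while True:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = max(edit_distance(a, b)
                        for a in clusters[i] for b in clusters[j])
                if d <= threshold and (best is None or d < best[0]):
                    best = (d, i, j)
        if best is None:
            return clusters
        _, i, j = best
        clusters[i] += clusters.pop(j)

clusters = complete_linkage(["john smith", "jon smith", "jane doe"], 2)
```

Complete linkage is a conservative criterion: unlike single linkage, it cannot chain dissimilar records into one cluster through intermediates, which is why it suits linkage decisions.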
Achieving Sub-Design Level Contrast for Coronagraphs with Deformable Mirrors
NASA Astrophysics Data System (ADS)
Eldorado Riggs, A. J.; Groff, T. D.; Carlotti, A.; Kasdin, N. J.
2013-01-01
Coronagraphs for space-based detection of earth-like exoplanets are normally designed assuming perfect optics; in practice, aberrations in the real optics degrade the achievable contrast. One or more deformable mirrors (DMs) are then utilized to correct for these aberrations and recover the lost contrast. We demonstrate a new, unified approach in which the coronagraph needs a design contrast only on the order of the errors in the optics. The DMs can then be used to achieve higher contrast by treating small areas of the coronagraph as amplitude errors in the system. This approach eases design and manufacturing constraints on coronagraphs and yields higher-throughput designs. Our initial simulations show that a single DM conjugate to a shaped pupil coronagraph can achieve a single-sided dark hole higher in contrast than the shaped pupil is designed for. Future work will focus on simulating double-sided dark holes with two DMs non-conjugate to the pupil plane. This will enable experiments in the Princeton High Contrast Imaging Lab (HCIL) with our two Boston Micromachines Corp. kilo-DMs. Symmetric dark holes have already been generated at the HCIL using the Stroke Minimization algorithm and a high-contrast shaped pupil in monochromatic and broadband light. Experiments with the unified shaped pupil-DM system will utilize the Kalman filter estimator recently developed in the HCIL for focal plane wavefront correction.
Lightning detection and exposure algorithms for smartphones
NASA Astrophysics Data System (ADS)
Wang, Haixin; Shao, Xiaopeng; Wang, Lin; Su, Laili; Huang, Yining
2015-05-01
This study focuses on the key theory of lightning detection and exposure, together with supporting experiments. First, an algorithm based on differential operations between two adjacent frames is selected to remove the background information and extract the lightning signal, and a threshold detection algorithm is applied to achieve precise detection of lightning. Second, an algorithm is proposed to obtain the scene exposure value, which can automatically detect the external illumination status. A look-up table is then built from the relationship between the exposure value and average image brightness to achieve rapid automatic exposure. Finally, a hardware test platform is established around a USB 3.0 industrial camera with a CMOS imaging sensor, and experiments are carried out on this platform to verify the performance of the proposed algorithms. The algorithms can quickly and effectively capture clear lightning pictures, even in challenging nighttime scenes, which will provide beneficial support to the smartphone industry, since current smartphone exposure methods often miss the capture or produce overexposed or underexposed pictures.
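A minimal sketch of the frame-differencing and thresholding step, assuming 8-bit grayscale frames stored as nested lists and a hypothetical threshold of 50:

```python
def detect_lightning(prev_frame, frame, threshold=50):
    """Flag pixels whose brightness jumps between adjacent frames.

    Differencing two adjacent frames removes the static background; the
    threshold (an assumed value here) keeps only sudden bright changes
    such as a lightning flash.
    """
    hits = []
    for y, (row_prev, row_cur) in enumerate(zip(prev_frame, frame)):
        for x, (p, c) in enumerate(zip(row_prev, row_cur)):
            if c - p > threshold:       # sudden brightness increase
                hits.append((x, y))
    return hits
```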
Quantum algorithms and the finite element method
NASA Astrophysics Data System (ADS)
Montanaro, Ashley; Pallister, Sam
2016-03-01
The finite element method is used to approximately solve boundary value problems for differential equations. The method discretizes the parameter space and finds an approximate solution by solving a large system of linear equations. Here we investigate the extent to which the finite element method can be accelerated using an efficient quantum algorithm for solving linear equations. We consider the representative general question of approximately computing a linear functional of the solution to a boundary value problem and compare the quantum algorithm's theoretical performance with that of a standard classical algorithm—the conjugate gradient method. Prior work claimed that the quantum algorithm could be exponentially faster but did not determine the overall classical and quantum run times required to achieve a predetermined solution accuracy. Taking this into account, we find that the quantum algorithm can achieve a polynomial speedup, the extent of which grows with the dimension of the partial differential equation. In addition, we give evidence that no improvement of the quantum algorithm can lead to a superpolynomial speedup when the dimension is fixed and the solution satisfies certain smoothness properties.
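The classical baseline named in the abstract, the conjugate gradient method, can be sketched as follows for a small symmetric positive-definite system (a textbook implementation, not the authors' code):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A (plain lists)."""
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]              # residual b - A x for the zero initial guess
    p = r[:]              # first search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = mv(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

In exact arithmetic the method converges in at most n iterations, which is why its run time as a function of target accuracy is the natural classical yardstick for the quantum comparison.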
LCD motion blur: modeling, analysis, and algorithm.
Chan, Stanley H; Nguyen, Truong Q
2011-08-01
Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast-moving objects in a scene are often perceived as blurred. This effect is known as LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient because they do not reflect the limitations of the human eye's tracking ability. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human-eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an l1-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and the Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms. PMID:21292596
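The flavor of the l1-norm regularized least-squares step can be shown with plain subgradient descent on a toy problem; the paper's actual method is a subgradient projection applied to its LCD blur model, and the step size, iteration count, and regularization weight below are illustrative assumptions:

```python
def soft_sign(x):
    return (x > 0) - (x < 0)    # subgradient of |x|, choosing 0 at x = 0

def l1_least_squares(A, b, lam=0.1, step=0.01, iters=2000):
    """Subgradient descent for  min_x ||A x - b||^2 + lam * ||x||_1 .
    A toy unconstrained version: it only demonstrates the l1-regularized
    least-squares machinery, not the paper's projection step."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - b
        r = [sum(Ai[j] * x[j] for j in range(n)) - bi for Ai, bi in zip(A, b)]
        # gradient of the quadratic term: 2 A^T r, plus lam * sign(x)
        g = [2 * sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        x = [xj - step * (gj + lam * soft_sign(xj)) for xj, gj in zip(x, g)]
    return x
```

For the identity operator the minimizer is the soft-thresholded data, which the sketch recovers numerically.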
ERIC Educational Resources Information Center
Carter, Dorinda J.
2008-01-01
In this article, Dorinda Carter examines the embodiment of a critical race achievement ideology in high-achieving black students. She conducted a yearlong qualitative investigation of the adaptive behaviors that nine high-achieving black students developed and employed to navigate the process of schooling at an upper-class, predominantly white,…
Visualizing higher order finite elements. Final report
Thompson, David C; Pebay, Philippe Pierre
2005-11-01
This report contains an algorithm for decomposing higher-order finite elements into regions appropriate for isosurfacing and proves the conditions under which the algorithm will terminate. Finite elements are used to create piecewise polynomial approximants to the solution of partial differential equations for which no analytical solution exists. These polynomials represent fields such as pressure, stress, and momentum. In the past, these polynomials have been linear in each parametric coordinate. Each polynomial coefficient must be uniquely determined by a simulation, and these coefficients are called degrees of freedom. When there are not enough degrees of freedom, simulations will typically fail to produce a valid approximation to the solution. Recent work has shown that increasing the number of degrees of freedom by increasing the order of the polynomial approximation (instead of increasing the number of finite elements, each of which has its own set of coefficients) can allow some types of simulations to produce a valid approximation with many fewer degrees of freedom than increasing the number of finite elements alone. However, once the simulation has determined the values of all the coefficients in a higher-order approximant, tools do not exist for visual inspection of the solution. This report focuses on a technique for the visual inspection of higher-order finite element simulation results based on decomposing each finite element into simplicial regions where existing visualization algorithms such as isosurfacing will work. The requirements of the isosurfacing algorithm are enumerated and related to the places where the partial derivatives of the polynomial become zero. The original isosurfacing algorithm is then applied to each of these regions in turn.
Speckle imaging algorithms for planetary imaging
Johansson, E.
1994-11-15
I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.
Detection algorithm for multiple rice seeds images
NASA Astrophysics Data System (ADS)
Cheng, F.; Ying, Y. B.
2006-10-01
The objective of this research is to develop a digital image analysis algorithm for the detection of multiple rice seeds in images. The rice seeds used for this study were of a hybrid rice variety. Images of multiple rice seeds were acquired with a machine vision system for quality inspection of bulk rice seeds, designed to inspect rice seeds on a rotating disk with a CCD camera. Combining morphological operations with parallel processing gave improvements in accuracy and a reduction in computation time. Using image features selected for their classification ability, highly acceptable defect classification was achieved when the algorithm was applied to all samples to test its adaptability.
Digital control algorithms for microgravity isolation systems
NASA Technical Reports Server (NTRS)
Sinha, A.; Wang, Y.-P.
1993-01-01
New digital control algorithms have been developed to achieve the desired transmissibility function for a microgravity isolation system. Two approaches are presented for the controller design in the context of a single-degree-of-freedom system for which an attractive electromagnet is used as the actuator. The relative displacement and the absolute acceleration of the mass are used as feedback signals. Results from numerical studies are presented. It has been found that the resulting transmissibility is quite close to the desired function. Also, the maximum coil currents required by the new algorithms are smaller than the maximum current demanded by the previously proposed lead/lag method.
Li, G; Sanchez, V; Nagaraj, P C S B; Khan, S; Rajpoot, N
2015-12-01
We propose a novel multitarget tracking framework for Myosin VI protein molecules in total internal reflection fluorescence microscopy sequences, which integrates an extended Hungarian algorithm with an interacting multiple model filter. The extended Hungarian algorithm, a method based on the linear assignment problem, helps to solve the measurement assignment and spot association problems commonly encountered when dealing with multiple targets, while a two-motion-model interacting multiple model filter increases tracking accuracy by modelling the nonlinear dynamics of Myosin VI protein molecules on actin filaments. The evaluation of our tracking framework is conducted on both real and synthetic total internal reflection fluorescence microscopy sequences. The results show that the framework achieves higher tracking accuracies compared to state-of-the-art tracking methods, especially for sequences with high spot density. PMID:26259144
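For intuition, the linear assignment problem that the extended Hungarian algorithm solves can be illustrated with a brute-force solver over a tiny cost matrix (exponential in n, so this is only a teaching stand-in for the polynomial-time Hungarian method):

```python
from itertools import permutations

def linear_assignment(cost):
    """Assign each row (e.g. a track) to a distinct column (e.g. a
    detected spot) so the total cost is minimal. Brute force over all
    permutations; fine for tiny matrices, not for real tracking."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = list(perm), c
    return best_perm, best_cost
```

In a tracker, `cost[i][j]` would typically be the distance between the predicted position of track i and detection j.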
A hierarchical exact accelerated stochastic simulation algorithm
Orendorff, David; Mjolsness, Eric
2012-01-01
A new algorithm, “HiER-leap” (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled “blocks” and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms. PMID:23231214
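The baseline that ER-leap and HiER-leap accelerate is Gillespie's direct-method stochastic simulation algorithm, sketched here for a single decay reaction (the rate constant, horizon, and seed are illustrative):

```python
import random

def gillespie_decay(n0=100, k=1.0, t_max=5.0, seed=1):
    """Direct-method SSA for the single reaction A -> 0.
    Each step draws an exponential waiting time from the total
    propensity, then fires the (only) reaction."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    traj = [(0.0, n0)]
    while n > 0 and t < t_max:
        a = k * n                    # total propensity
        t += rng.expovariate(a)      # time to the next reaction event
        n -= 1                       # fire the decay reaction
        traj.append((t, n))
    return traj
```

With many reaction channels, this one-event-at-a-time loop is what becomes expensive, motivating the block-parallel rejection sampling of HiER-leap.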
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
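A minimal 1-D sketch of the idea, using the logistic map (one of the 12 chaotic maps the study considers) in place of uniform random noise; the population size, parameters, and cooling schedule are illustrative assumptions, not the paper's tuned settings:

```python
import math

def logistic_map(x0=0.7, r=4.0):
    """Chaotic sequence in (0, 1), used here instead of uniform noise."""
    x = x0
    while True:
        x = r * x * (1 - x)
        yield x

def chaotic_firefly_min(f, n=8, iters=200, beta0=1.0, gamma=1.0, alpha=0.2):
    """Tiny 1-D firefly algorithm: each firefly moves toward brighter
    (lower-f) fireflies with distance-decaying attractiveness beta,
    plus a chaotic random step that is cooled over the iterations."""
    chaos = logistic_map()
    xs = [4 * next(chaos) - 2 for _ in range(n)]       # init in [-2, 2]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f(xs[j]) < f(xs[i]):                # firefly j is brighter
                    beta = beta0 * math.exp(-gamma * (xs[i] - xs[j]) ** 2)
                    xs[i] += beta * (xs[j] - xs[i]) + alpha * (next(chaos) - 0.5)
        alpha *= 0.98                                  # cool the chaotic step
    return min(xs, key=f)
```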
Parallel algorithms and architectures
Albrecht, A.; Jung, H.; Mehlhorn, K.
1987-01-01
Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; RELACS - a recursive layout computing system; and Parallel linear conflict-free subtree access.
The Algorithm Selection Problem
NASA Technical Reports Server (NTRS)
Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)
1994-01-01
Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.
UWB Tracking Algorithms: AOA and TDOA
NASA Technical Reports Server (NTRS)
Ni, Jianjun David; Arndt, D.; Ngo, P.; Gross, J.; Refford, Melinda
2006-01-01
Ultra-Wideband (UWB) tracking prototype systems are currently under development at NASA Johnson Space Center for various applications in space exploration. For long-range applications, a two-cluster Angle of Arrival (AOA) tracking method is employed in the tracking system; for close-in applications, a Time Difference of Arrival (TDOA) positioning methodology is exploited. Both AOA and TDOA are chosen to exploit the fine time resolution achievable with UWB signals. This talk presents a brief introduction to the AOA and TDOA methodologies. The theoretical analysis of these two algorithms reveals how the relevant parameters impact the tracking resolution. For the AOA algorithm, simulations show that a tracking resolution of less than 0.5% of the range can be achieved with the currently achievable time resolution of UWB signals. For the TDOA algorithm used in close-in applications, simulations show that sub-inch tracking resolution is achieved with a chosen tracking baseline configuration. The analytical and simulated results provide insightful guidance for the UWB tracking system design.
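The geometric core of the two-cluster AOA method is triangulation from two bearings; a minimal sketch, ignoring the real system's timing, calibration, and error handling:

```python
import math

def aoa_fix(p1, theta1, p2, theta2):
    """Two-cluster AOA positioning sketch: each cluster at p1/p2 reports
    a bearing (radians from the +x axis); the target is where the two
    rays cross. Pure geometry, no measurement-noise model."""
    (x1, y1), (x2, y2) = p1, p2
    c1, s1 = math.cos(theta1), math.sin(theta1)
    c2, s2 = math.cos(theta2), math.sin(theta2)
    det = c2 * s1 - c1 * s2          # zero when the bearings are parallel
    t1 = (c2 * (y2 - y1) - s2 * (x2 - x1)) / det
    return (x1 + t1 * c1, y1 + t1 * s1)
```

As the two bearings approach parallel, `det` approaches zero and small angle errors blow up, which is the geometric reason tracking resolution depends on the baseline between the clusters.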
ERIC Educational Resources Information Center
Hayes, Dianne
2012-01-01
Higher education institutions are in the battle of a lifetime as they are coping with political and economic uncertainties, threats to federal aid, declining state support, higher tuition rates and increased competition from for-profit institutions. Amid all these challenges, these institutions are pressed to keep up with technological demands,…
ERIC Educational Resources Information Center
Arkansas State Dept. of Higher Education, Little Rock.
This report presents information about higher education in Arkansas. Arkansas is 49th in the United States in the number of citizens over the age of 25 with a baccalaureate or higher degree. Arkansas faces shortages of qualified teachers and nurses in regions of the state at a time when the number of graduates in these professions is declining…
Minorities in Higher Education.
ERIC Educational Resources Information Center
Justiz, Manuel J., Ed.; And Others
This book presents 19 papers on efforts to increase the participation of members of minority groups in higher education. The papers are: (1) "Demographic Trends and the Challenges to American Higher Education" (Manuel Justiz); (2) "Three Realities: Minority Life in the United States--The Struggle for Economic Equity (adapted by Don M. Blandin);…
Reimagining Christian Higher Education
ERIC Educational Resources Information Center
Hulme, E. Eileen; Groom, David E., Jr.; Heltzel, Joseph M.
2016-01-01
The challenges facing higher education continue to mount. The shifting of the U.S. ethnic and racial demographics, the proliferation of advanced digital technologies and data, and the move from traditional degrees to continuous learning platforms have created an unstable environment to which Christian higher education must adapt in order to remain…
ERIC Educational Resources Information Center
Ruben, Brent D., Ed.
This volume contains 21 new and classic papers and readings on quality philosophies and concepts, first, as they have been applied in business and industry but primarily as they relate to and can be applied in higher education. The introduction is titled "The Quality Approach in Higher Education: Context and Concepts for Change" by Brent D. Ruben.…
Higher Education Exchange 2006
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2006-01-01
Contributors to this issue of the Higher Education Exchange debate the issues around knowledge production, discuss the acquisition of deliberative skills for democracy, and examine how higher education prepares, or does not prepare, students for citizenship roles. Articles include: (1) "Foreword" (Deborah Witte); (2) "Knowledge, Judgment and…
ERIC Educational Resources Information Center
MCGRATH, EARL J.
This document is a report on a group inquiry into the substance and implications of universal higher education. Eleven chapters are papers presented at a conference held under the auspices of the Institute of Higher Education, Teachers College, Columbia University, in Puerto Rico, November 15-21, 1964, forecasting the form and mission of American…
Reinventing Continuing Higher Education
ERIC Educational Resources Information Center
Walshok, Mary Lindenstein
2012-01-01
Re-inventing continuing higher education is about finding ways to be a more central player in a region's civic, cultural, and economic life as well as in the education of individuals for work and citizenship. Continuing higher education will require data gathering, analytical tools, convening authority, interpretive skills, new models of delivery,…
ERIC Educational Resources Information Center
Bank, Barbara J., Ed.
2011-01-01
This comprehensive, encyclopedic review explores gender and its impact on American higher education across historical and cultural contexts. Challenging recent claims that gender inequities in U.S. higher education no longer exist, the contributors--leading experts in the field--reveal the many ways in which gender is embedded in the educational…
Consumerism in Higher Education
ERIC Educational Resources Information Center
Green, Mark
1973-01-01
In considering consumerism in higher education, the student becomes the "consumer," the university the "corporation," and higher education the "education industry." Other members of the education fraternity become investors, management, workers, direct consumers, and indirect consumers. This article proposes that it behooves the student to…
Higher Education Exchange, 2009
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2009-01-01
This volume begins with an essay by Noelle McAfee, a contributor who is familiar to readers of Higher Education Exchange (HEX). She reiterates Kettering's president David Mathews' argument regarding the disconnect between higher education's sense of engagement and the public's sense of engagement, and suggests a way around the epistemological…
The Mechanics of Human Achievement
Duckworth, Angela L.; Eichstaedt, Johannes C.; Ungar, Lyle H.
2015-01-01
Countless studies have addressed why some individuals achieve more than others. Nevertheless, the psychology of achievement lacks a unifying conceptual framework for synthesizing these empirical insights. We propose organizing achievement-related traits by two possible mechanisms of action: Traits that determine the rate at which an individual learns a skill are talent variables and can be distinguished conceptually from traits that determine the effort an individual puts forth. This approach takes inspiration from Newtonian mechanics: achievement is akin to distance traveled, effort to time, skill to speed, and talent to acceleration. A novel prediction from this model is that individual differences in effort (but not talent) influence achievement (but not skill) more substantially over longer (rather than shorter) time intervals. Conceptualizing skill as the multiplicative product of talent and effort, and achievement as the multiplicative product of skill and effort, advances similar, but less formal, propositions by several important earlier thinkers. PMID:26236393
a Hadoop-Based Algorithm of Generating dem Grid from Point Cloud Data
NASA Astrophysics Data System (ADS)
Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.
2015-04-01
Airborne LiDAR technology has proven to be among the most powerful tools for obtaining high-density, high-accuracy, and significantly detailed surface information about terrain and surface objects within a short time, from which a high-quality Digital Elevation Model (DEM) can be extracted. Point cloud data generated from the pre-processed data must be classified by segmentation algorithms to separate terrain points from other points, followed by a procedure that interpolates the selected points to turn them into DEM data. The whole procedure takes a long time and huge computing resources due to the high point density, a problem that has attracted considerable research attention. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is well suited to improving the efficiency of DEM generation algorithms. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner was used as the original data to generate a DEM with a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is big enough, but the multi-node Hadoop implementation achieves a higher performance-cost ratio when the point set is very large.
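The Map/Reduce structure of the gridding step can be sketched in plain Python; the cell size and the choice of per-cell averaging as the interpolator are illustrative assumptions (the real implementation runs on Hadoop and may interpolate differently):

```python
from collections import defaultdict

def map_phase(points, cell=1.0):
    """Map: emit (grid-cell key, elevation) for every LiDAR point."""
    for x, y, z in points:
        yield (int(x // cell), int(y // cell)), z

def reduce_phase(pairs):
    """Reduce: combine the elevations that landed in each cell
    (here, a simple average) to produce the DEM grid values."""
    cells = defaultdict(list)
    for key, z in pairs:
        cells[key].append(z)
    return {key: sum(zs) / len(zs) for key, zs in cells.items()}
```

Because each cell is reduced independently, the reduce phase parallelizes naturally across Hadoop nodes, which is the source of the speedup on very large point sets.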
Unmet Promise: Raising Minority Achievement. The Achievement Gap.
ERIC Educational Resources Information Center
Johnston, Robert C.; Viadero, Debra
2000-01-01
This first in a four-part series on why academic achievement gaps persist discusses how to raise minority achievement. It explains how earlier progress in closing the gap has stalled, while at the same time, the greater diversity of student populations and the rapid growth of the Hispanic population and of other ethnic groups have reshaped the…
To Achieve or Not to Achieve: The Question of Women.
ERIC Educational Resources Information Center
Gilmore, Beatrice
Questionnaire and projective data from 323 women aged 18 to 50 were analyzed in order to study the relationships of need achievement and motive to avoid success to age, sex role ideology, and stage in the family cycle. Family background and educational variables were also considered. Level of need achievement was found to be significantly related…
Mathematics Achievement in High- and Low-Achieving Secondary Schools
ERIC Educational Resources Information Center
Mohammadpour, Ebrahim; Shekarchizadeh, Ahmadreza
2015-01-01
This paper identifies the amount of variance in mathematics achievement in high- and low-achieving schools that can be explained by school-level factors, while controlling for student-level factors. The data were obtained from 2679 Iranian eighth graders who participated in the 2007 Trends in International Mathematics and Science Study. Of the…
An, Lin; Shen, Tueng T; Wang, Ruikang K
2011-10-01
This paper presents comprehensive and depth-resolved retinal microvasculature images within human retina achieved by a newly developed ultrahigh sensitive optical microangiography (UHS-OMAG) system. Due to its high flow sensitivity, UHS-OMAG is much more sensitive to tissue motion due to the involuntary movement of the human eye and head compared to the traditional OMAG system. To mitigate these motion artifacts on final imaging results, we propose a new phase compensation algorithm in which the traditional phase-compensation algorithm is repeatedly used to efficiently minimize the motion artifacts. Comparatively, this new algorithm demonstrates at least 8 to 25 times higher motion tolerability, critical for the UHS-OMAG system to achieve retinal microvasculature images with high quality. Furthermore, the new UHS-OMAG system employs a high speed line scan CMOS camera (240 kHz A-line scan rate) to capture 500 A-lines for one B-frame at a 400 Hz frame rate. With this system, we performed a series of in vivo experiments to visualize the retinal microvasculature in humans. Two featured imaging protocols are utilized. The first is of the low lateral resolution (16 μm) and a wide field of view (4 × 3 mm(2) with single scan and 7 × 8 mm(2) for multiple scans), while the second is of the high lateral resolution (5 μm) and a narrow field of view (1.5 × 1.2 mm(2) with single scan). The great imaging performance delivered by our system suggests that UHS-OMAG can be a promising noninvasive alternative to the current clinical retinal microvasculature imaging techniques for the diagnosis of eye diseases with significant vascular involvement, such as diabetic retinopathy and age-related macular degeneration. PMID:22029360
Phase unwrapping algorithms in laser propagation simulation
NASA Astrophysics Data System (ADS)
Du, Rui; Yang, Lijia
2013-08-01
Current simulations of laser propagation in the atmosphere usually need to deal with beams in strong turbulence, where simulating the transmission via the Fourier transform can lose part of the information and leaves the phase of the beam, stored as a 2-D array, wrapped modulo 2π. An effective unwrapping algorithm is needed to obtain a continuous result with fast calculation. The unwrapping algorithms used in atmospheric propagation are similar to, but not the same as, those used in radar or 3-D surface reconstruction. In this article, three classic unwrapping algorithms are tried in wave-front reconstruction simulation: block least squares (BLS), mask-cut (MCUT), and Flynn's minimal discontinuity algorithm (FMD). Each algorithm was tested 100 times under six identical conditions, covering low (64x64), medium (128x128), and high (256x256) resolution phase arrays, with and without noise. Comparing the results leads to the following conclusions. The BLS-based algorithm is the fastest, and its result is acceptable at low resolution without noise. MCUT is more accurate, though it slows as the array resolution increases, and it is sensitive to noise, resulting in large-area errors. Flynn's algorithm has the best accuracy, but it occupies a large amount of memory during calculation. Finally, the article presents a new algorithm based on an Active on Vertex (AOV) network, which builds a logical graph to cut the search space and then finds a minimal discontinuity solution. The AOV algorithm is faster than MCUT in dealing with high-resolution phase arrays, with accuracy as good as that of FMD in the tests.
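The basic 2π-correction rule that all of these unwrapping algorithms share is easiest to see in one dimension; a minimal sketch (the 2-D algorithms compared above differ in how they choose paths through the array, not in this rule):

```python
import math

def unwrap(phases):
    """1-D phase unwrapping: whenever the jump between neighbours
    exceeds pi in magnitude, add or subtract a multiple of 2*pi so the
    sequence stays continuous."""
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        d = cur - prev
        if d > math.pi:
            offset -= 2 * math.pi      # wrapped downward crossing
        elif d < -math.pi:
            offset += 2 * math.pi      # wrapped upward crossing
        out.append(cur + offset)
    return out
```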
A survey of DNA motif finding algorithms
Das, Modan K; Dai, Ho-Kwok
2007-01-01
Background Unraveling the mechanisms that regulate gene expression is a major challenge in biology. An important task in this challenge is to identify regulatory elements, especially the binding sites in deoxyribonucleic acid (DNA) for transcription factors. These binding sites are short DNA segments that are called motifs. Recent advances in genome sequence availability and in high-throughput gene expression analysis technologies have allowed for the development of computational methods for motif finding. As a result, a large number of motif finding algorithms have been implemented and applied to various motif models over the past decade. This survey reviews the latest developments in DNA motif finding algorithms. Results Earlier algorithms use promoter sequences of coregulated genes from single genome and search for statistically overrepresented motifs. Recent algorithms are designed to use phylogenetic footprinting or orthologous sequences and also an integrated approach where promoter sequences of coregulated genes and phylogenetic footprinting are used. All the algorithms studied have been reported to correctly detect the motifs that have been previously detected by laboratory experimental approaches, and some algorithms were able to find novel motifs. However, most of these motif finding algorithms have been shown to work successfully in yeast and other lower organisms, but perform significantly worse in higher organisms. Conclusion Despite considerable efforts to date, DNA motif finding remains a complex challenge for biologists and computer scientists. Researchers have taken many different approaches in developing motif discovery tools and the progress made in this area of research is very encouraging. Performance comparison of different motif finding tools and identification of the best tools have proven to be a difficult task because tools are designed based on algorithms and motif models that are diverse and complex and our incomplete understanding of
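The "statistically overrepresented motif" idea behind the earlier single-genome algorithms can be sketched as simple k-mer counting over promoter sequences; this is a toy version, since real motif finders use probabilistic motif models and background correction:

```python
from collections import Counter

def top_kmers(seqs, k=4, n=3):
    """Count every k-mer across a set of promoter sequences and return
    the n most frequent ones -- the crudest form of spotting an
    overrepresented candidate motif."""
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    return counts.most_common(n)
```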
Neural Network Algorithm for Particle Loading
J. L. V. Lewandowski
2003-04-25
An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.
Affective Processes and Academic Achievement.
ERIC Educational Resources Information Center
Feshbach, Norma Deitch; Feshbach, Seymour
1987-01-01
Data indicate that for girls, affective dispositional factors (empathy, depressive affectivity, aggression, and self-concept) are intimately linked to cognitive development and academic achievement. (PCB)
Attribution theory in science achievement
NASA Astrophysics Data System (ADS)
Craig, Martin
Recent research reveals consistent lags in American students' science achievement scores. Not only are the scores lower in the United States compared to other developed nations, but even within the United States, too many students are well below science proficiency scores for their grade levels. The current research addresses this problem by examining potential malleable factors that may predict science achievement in twelfth graders using 2009 data from the National Assessment of Educational Progress (NAEP). Principal component factor analysis was conducted to determine the specific items that contribute to each overall factor. A series of multiple regressions was then run to estimate the predictive value of each of these factors for science achievement. All significant factors were ultimately examined together (also using multiple regression) to determine the most powerful predictors of science achievement; the results suggested interventions to strengthen students' science achievement scores and encourage persistence in the sciences at the college level and beyond. Although there is a variety of research highlighting how students in the US are falling behind other developed nations in science and math achievement, as yet, little research has addressed ways of intervening to close this gap. The current research is a starting point, seeking to identify malleable factors that contribute to science achievement. More specifically, this research examined the types of attributions that predict science achievement in twelfth grade students.
Color sorting algorithm based on K-means clustering algorithm
NASA Astrophysics Data System (ADS)
Zhang, BaoFeng; Huang, Qian
2009-11-01
In the process of raisin production, a variety of color impurities arise that need to be removed effectively. A new, efficient raisin color-sorting algorithm is presented here. First, threshold-based image processing was applied for pre-processing, and the gray-scale distribution characteristic of the raisin image was found. To obtain the chromatic aberration image and reduce disturbance, frame subtraction was performed, subtracting the background image data from the target image data. Second, a Haar wavelet filter was used to obtain a smoothed image of the raisins. According to the different colors and external features such as mildew and spots, image characteristics were calculated so as to fully reflect the quality differences between raisins of different types. After the processing above, the images were analyzed by the K-means clustering method, which achieves adaptive extraction of the statistical features; on this basis, the image data were divided into different categories, making the categories of abnormal colors distinct. Using this algorithm, raisins of abnormal color and those with mottles were eliminated. The sorting rate was up to 98.6%, and the ratio of normal raisins to sorted-out grains was less than one eighth.
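The clustering stage of the pipeline is ordinary k-means (Lloyd's iteration) on color features. A minimal sketch on RGB tuples; initializing centroids from the first k points is a simplification, and the paper's adaptive statistical-feature extraction is not reproduced:

```python
def kmeans(points, k, iters=20):
    """Minimal Lloyd's k-means on color tuples.
    Simplification: centroids start from the first k points."""
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda j: sum((a - b) ** 2
                                              for a, b in zip(p, centroids[j])))
        # update step: move each centroid to the mean of its members
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = [sum(c) / len(members) for c in zip(*members)]
    return centroids, labels
```

With k=2 on a mix of dark and bright pixels, the labels split the two color groups, which is the separation of normal raisins from abnormal-color grains in miniature.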
Parallelization of the Pipelined Thomas Algorithm
NASA Technical Reports Server (NTRS)
Povitsky, A.
1998-01-01
In this study the following questions are addressed. Is it possible to improve the parallelization efficiency of the Thomas algorithm? How should the Thomas algorithm be formulated in order to get solved lines that are used as data for other computational tasks while processors are idle? To answer these questions, two-step pipelined algorithms (PAs) are introduced formally. It is shown that the idle processor time is invariant with respect to the order of backward and forward steps in PAs starting from one outermost processor. The advantage of PAs starting from two outermost processors is small. Versions of the pipelined Thomas algorithms considered here fall into the category of PAs. These results show that the parallelization efficiency of the Thomas algorithm cannot be improved directly. However, the processor idle time can be used if some data has been computed by the time processors become idle. To achieve this goal the Immediate Backward pipelined Thomas Algorithm (IB-PTA) is developed in this article. The backward step is computed immediately after the forward step has been completed for the first portion of lines. This enables the completion of the Thomas algorithm for some of these lines before processors become idle. An algorithm for generating a static processor schedule recursively is developed. This schedule is used to switch between forward and backward computations and to control communications between processors. The advantage of the IB-PTA over the basic PTA is the presence of solved lines, which are available for other computations, by the time processors become idle.
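For context, the serial Thomas algorithm that these pipelined variants parallelize solves a tridiagonal system in one forward elimination sweep and one backward substitution sweep:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with the Thomas algorithm.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(b)
    cp = [0.0] * n          # modified super-diagonal
    dp = [0.0] * n          # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):   # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # backward substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The data dependence of both sweeps along the line is exactly what forces the pipelined (and idle-time-hiding) formulations discussed in the abstract.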
A Simple Calculator Algorithm.
ERIC Educational Resources Information Center
Cook, Lyle; McWilliam, James
1983-01-01
The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
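The abstract does not reproduce the algorithm itself, but one classical square-root-only iteration for cube roots (a sketch; not necessarily the exact method the article demonstrates) exploits the fixed point of y ← sqrt(sqrt(n·y)): at the fixed point y = (n·y)^(1/4), so y⁴ = n·y, i.e. y³ = n.

```python
import math

def cuberoot(n, iters=30):
    """Approximate n**(1/3) for n > 0 using only square roots.
    Each step replaces y by sqrt(sqrt(n * y)); the log-scale error
    shrinks by a factor of 4 per iteration."""
    y = 1.0
    for _ in range(iters):
        y = math.sqrt(math.sqrt(n * y))
    return y
```

Ten iterations already give roughly six significant digits, consistent with "a good approximation within a few iterations."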
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representations. Based on the bat echolocation mechanism and the cloud model's strength in representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
NASA Astrophysics Data System (ADS)
Feigin, G.; Ben-Yosef, N.
1983-10-01
A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.
Diagnostic Algorithm Benchmarking
NASA Technical Reports Server (NTRS)
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Algorithmically specialized parallel computers
Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.
1985-01-01
This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.
Memetic algorithm-based multi-objective coverage optimization for wireless sensor networks.
Chen, Zhi; Li, Shuai; Yue, Wenjing
2014-01-01
Maintaining effective coverage and extending the network lifetime as much as possible has become one of the most critical issues in the coverage of WSNs. In this paper, we propose a multi-objective coverage optimization algorithm for WSNs, namely MOCADMA, which models the coverage control of WSNs as a multi-objective optimization problem. MOCADMA uses a memetic algorithm with a dynamic local search strategy to optimize the coverage of WSNs and achieve objectives such as high network coverage, effective node utilization and more residual energy. In MOCADMA, the alternative solutions are represented as chromosomes in matrix form, and the optimal solutions are selected through numerous iterations of the evolution process, including selection, crossover, mutation, local enhancement, and fitness evaluation. The experiment and evaluation results show that MOCADMA maintains sensing coverage well, achieves higher network coverage while improving energy efficiency and effectively prolonging the network lifetime, and offers a significant improvement over some existing algorithms. PMID:25360579
Focusing through a turbid medium by amplitude modulation with genetic algorithm
NASA Astrophysics Data System (ADS)
Dai, Weijia; Peng, Ligen; Shao, Xiaopeng
2014-05-01
Multiple scattering of light in opaque materials such as white paint and human tissue forms a volume speckle field, which greatly reduces the imaging depth and degrades the imaging quality. A novel approach is proposed to focus light through a turbid medium using amplitude modulation with a genetic algorithm (GA) from speckle patterns. Compared with phase modulation, amplitude modulation, in which each element of the spatial light modulator (SLM) is either zero or one, is much easier to achieve. Theoretical and experimental results show that the GA is better suited to low signal-to-noise ratio (SNR) environments than existing amplitude control algorithms such as binary amplitude modulation. The circular Gaussian distribution model and Rayleigh-Sommerfeld diffraction theory are employed in our simulations to describe the turbid medium and light propagation between optical devices, respectively. It is demonstrated that the GA technique can achieve a higher overall enhancement, converges much faster than the others, and outperforms all the compared algorithms at high noise. Focusing through a turbid medium has potential for the observation of cells and protein molecules in biological tissues and other structures at the micro/nano scale.
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster than most
Parallel algorithms and architecture for computation of manipulator forward dynamics
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
Parallel computation of manipulator forward dynamics is investigated. Considering three classes of algorithms for the solution of the problem, that is, the O(n), the O(n exp 2), and the O(n exp 3) algorithms, parallelism in the problem is analyzed. It is shown that the problem belongs to the class NC and that the time and processor bounds are of O(log exp 2 n) and O(n exp 4), respectively. However, the fastest stable parallel algorithms achieve a computation time of O(n) and can be derived by parallelization of the O(n exp 3) serial algorithms. Parallel computation of the O(n exp 3) algorithms requires the development of parallel algorithms for a set of fundamentally different problems, that is, the Newton-Euler formulation, the computation of the inertia matrix, decomposition of the symmetric, positive definite matrix, and the solution of triangular systems. Parallel algorithms for this set of problems are developed which can be efficiently implemented on a unique architecture, a triangular array of n(n+2)/2 processors with a simple nearest-neighbor interconnection. This architecture is particularly suitable for VLSI and WSI implementations. The developed parallel algorithm, compared to the best serial O(n) algorithm, achieves an asymptotic speedup of more than two orders of magnitude in the computation of the forward dynamics.
NASA Astrophysics Data System (ADS)
Li, Jinsha; Li, Junmin
2016-07-01
In this paper, an adaptive fuzzy iterative learning control scheme is proposed for coordination problems of Mth order (M ≥ 2) distributed multi-agent systems. Every follower agent is a higher order integrator with unknown nonlinear dynamics and input disturbance. The dynamics of the leader are a higher order nonlinear system and are only available to a portion of the follower agents. With distributed initial state learning, the unified distributed protocols, combining time-domain and iteration-domain adaptive laws, guarantee that the follower agents track the leader uniformly on [0, T]. The proposed algorithm is then extended to achieve formation control. A numerical example and a multiple robotic system are provided to demonstrate the performance of the proposed approach.
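The backbone of any such scheme is the iteration-domain update u_{k+1} = u_k + L·e_k applied over a whole trial. A scalar P-type sketch with a hypothetical static plant of unknown gain (the fuzzy approximators, communication graph, and time-domain adaptive laws of the paper are omitted):

```python
def ilc(reference, plant, learning_gain=0.8, trials=25):
    """P-type iterative learning control: after each complete trial,
    correct the whole input trajectory by gain * tracking error."""
    u = [0.0] * len(reference)
    e = list(reference)
    for _ in range(trials):
        y = plant(u)                                   # run one trial
        e = [r - yi for r, yi in zip(reference, y)]    # tracking error
        u = [ui + learning_gain * ei for ui, ei in zip(u, e)]
    return u, e

# hypothetical plant: unknown static gain of 0.5
plant = lambda u: [0.5 * ui for ui in u]
u, e = ilc([1.0, 2.0, 3.0], plant)
```

Each trial contracts the error by |1 − 0.5·0.8| = 0.6, so the tracking error vanishes across iterations, the same iteration-domain convergence the paper establishes in a far more general setting.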
An enhanced algorithm for multiple sequence alignment of protein sequences using genetic algorithm
Kumar, Manish
2015-01-01
One of the most fundamental operations in biological sequence analysis is multiple sequence alignment (MSA). The basic multiple sequence alignment problem is to determine the most biologically plausible alignment of protein or DNA sequences. In this paper, an alignment method using a genetic algorithm for multiple sequence alignment is proposed. Two genetic operators, crossover and mutation, were defined and implemented with the proposed method in order to track the evolution of the population and the quality of the aligned sequences. The proposed method is assessed on a protein benchmark dataset, BALIBASE, by comparing the results obtained with those of other alignment algorithms, e.g., SAGA, RBT-GA, PRRP, HMMT, SB-PIMA, CLUSTALX, CLUSTAL W, DIALIGN and PILEUP8. Experiments on a wide range of data have shown that the proposed algorithm is much better (in terms of score) than previously proposed algorithms in its ability to achieve high alignment quality. PMID:27065770
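The two operators named above act on a population as in the generic GA skeleton below (a toy illustration on fixed-length strings with a made-up objective; the paper's alignment encoding and scoring are not reproduced):

```python
import random

def genetic_search(fitness, alphabet, length, pop_size=40, gens=200,
                   mut_rate=0.1, seed=1):
    """Generic GA skeleton with one-point crossover and point mutation,
    the two operators highlighted in the abstract."""
    rng = random.Random(seed)
    pop = [[rng.choice(alphabet) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, length)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(length):              # point mutation
                if rng.random() < mut_rate:
                    child[i] = rng.choice(alphabet)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

target = list("ACGTACGT")                         # toy objective: match a string
best = genetic_search(lambda s: sum(a == b for a, b in zip(s, target)),
                      "ACGT", len(target))
```

In the paper's setting the chromosome would encode gap placements across sequences and the fitness would be an alignment score; the selection/crossover/mutation loop itself is unchanged.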
Sustainability and Higher Education
ERIC Educational Resources Information Center
Hales, David
2008-01-01
People face four fundamental dilemmas, which are essentially moral choices: (1) alleviating poverty; (2) removing the gap between rich and poor; (3) controlling the use of violence for political ends; and (4) changing the patterns of production and consumption and achieving the transition to sustainability. The world in which future generations…
A novel algorithm for blind deconvolution applied to the improvement of radiographic images
NASA Astrophysics Data System (ADS)
de Almeida, Gevaldo L.; Silvani, Maria Ines
2013-05-01
A novel algorithm for blind deconvolution is proposed in this work, which does not require any previous information concerning the image to be unfolded but solely an assumed shape for the PSF. This algorithm, incorporating a Richardson-Lucy unfolding procedure, assesses the overall contrast of the image unfolded with each increasing w, seeking the highest value. The basic idea behind this concept is that when the spatial resolution of the image is improved, the contrast improves too, because pixel overlap diminishes. Trials with several different images acquired with neutron and gamma-ray transmission radiography have been carried out in order to evaluate the correctness of the proposed algorithm. It has been found that for a steadily increasing w, the overall contrast increases, reaches a maximum and then decreases. The w-value yielding the highest contrast can be reached after 1 to 3 iterations, and further iterations do not affect it. Images deconvoluted with this value, but with a higher number of iterations, exhibit better quality than their companions deconvoluted with neighboring values, thus corroborating the best w-value. Synthetic images with known resolutions return the same w-values used to degrade them, showing the soundness of the proposed algorithm.
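The unfolding engine inside the method is the standard Richardson-Lucy iteration; a 1-D sketch follows (the contrast-driven search for the best PSF width w is the paper's contribution and is not reproduced here; the PSF is assumed symmetric and normalized to sum 1):

```python
def richardson_lucy(blurred, psf, iters=100):
    """1-D Richardson-Lucy deconvolution (nonnegative data; PSF sums to 1).
    For a symmetric PSF, convolution and correlation coincide, which keeps
    this sketch short."""
    def correlate(x, k):
        half = len(k) // 2
        return [sum(x[i + j - half] * k[j]
                    for j in range(len(k)) if 0 <= i + j - half < len(x))
                for i in range(len(x))]
    est = [1.0] * len(blurred)            # flat nonnegative starting estimate
    for _ in range(iters):
        denom = correlate(est, psf)       # current blurred estimate
        ratio = [b / d if d > 0 else 0.0 for b, d in zip(blurred, denom)]
        est = [e * c for e, c in zip(est, correlate(ratio, psf))]
    return est
```

Run on a spike blurred by a [0.25, 0.5, 0.25] kernel, the iteration re-concentrates the flux at the spike position, which is the resolution (and hence contrast) recovery the abstract relies on.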
An efficient tensor transpose algorithm for multicore CPU, Intel Xeon Phi, and NVidia Tesla GPU
NASA Astrophysics Data System (ADS)
Lyakh, Dmitry I.
2015-04-01
An efficient parallel tensor transpose algorithm is suggested for shared-memory computing units, namely, multicore CPU, Intel Xeon Phi, and NVidia GPU. The algorithm operates on dense tensors (multidimensional arrays) and is based on the optimization of cache utilization on x86 CPU and the use of shared memory on NVidia GPU. From the applied side, the ultimate goal is to minimize the overhead encountered in the transformation of tensor contractions into matrix multiplications in computer implementations of advanced methods of quantum many-body theory (e.g., in electronic structure theory and nuclear physics). A particular accent is made on higher-dimensional tensors that typically appear in the so-called multireference correlated methods of electronic structure theory. Depending on tensor dimensionality, the presented optimized algorithms can achieve an order of magnitude speedup on x86 CPUs and 2-3 times speedup on NVidia Tesla K20X GPU with respect to the naïve scattering algorithm (no memory access optimization). The tensor transpose routines developed in this work have been incorporated into a general-purpose tensor algebra library (TAL-SH).
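The cache-utilization idea reduces, in the simplest 2-D case, to tiling the transpose loops so that each tile of the source and destination fits in cache. A Python sketch of the access pattern only (the actual library works on N-dimensional tensors on CPU and GPU):

```python
def blocked_transpose(a, n, block=32):
    """Cache-blocked transpose of an n*n matrix stored as a flat,
    row-major list. The loops are tiled so each block x block tile
    is touched contiguously in both source and destination."""
    t = [0.0] * (n * n)
    for ii in range(0, n, block):
        for jj in range(0, n, block):
            for i in range(ii, min(ii + block, n)):
                for j in range(jj, min(jj + block, n)):
                    t[j * n + i] = a[i * n + j]
    return t
```

In compiled code this tiling is what turns the strided writes of a naïve transpose into cache-friendly bursts; the same principle, generalized to permutations of many indices, underlies the reported speedups.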
A novel LTE scheduling algorithm for green technology in smart grid.
Hindia, Mohammad Nour; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Chayon, Muhammad Hasibur Rashid
2015-01-01
Smart grid (SG) applications are being used nowadays to meet the demand of increasing power consumption. An SG application is considered a perfect solution for combining renewable energy resources and the electrical grid by creating a bidirectional communication channel between the two systems. In this paper, three SG applications applicable to renewable energy systems, namely, distribution automation (DA), distributed energy system-storage (DER) and electrical vehicle (EV), are investigated in order to study their suitability in a Long Term Evolution (LTE) network. To compensate for the weaknesses of existing scheduling algorithms, a novel bandwidth estimation and allocation technique and a new scheduling algorithm are proposed. The technique allocates available network resources based on each application's priority, whereas the algorithm makes scheduling decisions based on dynamic weighting factors over multiple criteria to satisfy the quality-of-service demands (delay, past average throughput and instantaneous transmission rate). Finally, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay and lower packet loss rate for DA and DER, as well as providing a degree of service for EV. In terms of fairness, the proposed algorithm shows 3%, 7% and 9% better performance compared to the exponential rule (EXP-Rule), modified-largest weighted delay first (M-LWDF) and exponential/PF (EXP/PF), respectively. PMID:25830703
A novel sparse coding algorithm for classification of tumors based on gene expression data.
Kolali Khormuji, Morteza; Bazrafkan, Mehrnoosh
2016-06-01
High-dimensional genomic and proteomic data play an important role in many applications in medicine such as prognosis of diseases, diagnosis, prevention and molecular biology, to name a few. Classifying such data is a challenging task due to issues such as the curse of dimensionality, noise and redundancy. Recently, some researchers have used sparse representation (SR) techniques to analyze high-dimensional biological data, for example in the classification of cancer patients based on gene expression datasets. A common problem with all SR-based biological data classification methods is that they cannot utilize the topological (geometrical) structure of the data. More precisely, these methods transfer the data into a sparse feature space without preserving the local structure of the data points. In this paper, we propose a novel SR-based cancer classification algorithm for gene expression data that takes into account the geometrical information of all data. Precisely speaking, we incorporate the local linear embedding algorithm into the sparse coding framework, by which we can preserve the geometrical structure of all data. For performance comparison, we applied our algorithm to six tumor gene expression datasets, demonstrating that the proposed method achieves higher classification accuracy than state-of-the-art SR-based tumor classification algorithms. PMID:26337064
Staged optimization algorithms based MAC dynamic bandwidth allocation for OFDMA-PON
NASA Astrophysics Data System (ADS)
Liu, Yafan; Qian, Chen; Cao, Bingyao; Dun, Han; Shi, Yan; Zou, Junni; Lin, Rujian; Wang, Min
2016-06-01
Orthogonal frequency division multiple access passive optical network (OFDMA-PON) has been considered a promising solution for next generation PONs due to its high spectral efficiency and flexible bandwidth allocation scheme. In order to take full advantage of these merits of OFDMA-PON, a high-efficiency medium access control (MAC) dynamic bandwidth allocation (DBA) scheme is needed. In this paper, we propose two DBA algorithms which act on two different stages of the resource allocation process. To achieve higher bandwidth utilization and ensure fairness among ONUs, we propose a DBA algorithm based on frame structure for the physical layer mapping stage. Targeting the global quality of service (QoS) of OFDMA-PON, we propose a full-range DBA algorithm with service level agreement (SLA) and class of service (CoS) for the bandwidth allocation arbitration stage. The performance of the proposed MAC DBA scheme containing these two algorithms is evaluated using numerical simulations. Simulations of a 15 Gbps network with 1024 sub-carriers and 32 ONUs demonstrate a maximum network throughput of 14.87 Gbps and a maximum packet delay of 1.45 ms for the highest priority CoS under high load conditions.
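At its core, the arbitration stage grants bandwidth by class priority under a capacity cap. A toy sketch in which strict priority order stands in for the paper's SLA/CoS weighting (the request tuples and names are illustrative, not the paper's message format):

```python
def allocate(requests, capacity):
    """Grant bandwidth in descending priority order; each request is
    capped by its own demand and by the remaining capacity.
    requests: iterable of (onu_name, demand, priority)."""
    grants = {}
    remaining = capacity
    for onu, demand, priority in sorted(requests, key=lambda r: -r[2]):
        grant = min(demand, remaining)
        grants[onu] = grant
        remaining -= grant
    return grants
```

For example, with 7 bandwidth units and demands of 5 (priority 3), 4 (priority 2) and 5 (priority 1), the highest class is fully served, the middle class gets the remainder, and the lowest is starved; replacing strict ordering with dynamic per-class weights is what gives the paper's scheme its fairness properties.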
General Achievement Trends: South Dakota
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
The Process of Science Achievement.
ERIC Educational Resources Information Center
Papanastasiou, Constantinos; Papanastasiou, Elena C.
2002-01-01
Investigates the science achievement of 8th grade students in Cyprus by using a structural equation model with three exogenous constructs--family's educational background, reinforcements, and school climate, and three endogenous constructs--teaching, student attitudes, and achievement. Proposes a model for the effects of family, school, student…
Examination Regimes and Student Achievement
ERIC Educational Resources Information Center
Cosentino de Cohen, Clemencia
2010-01-01
Examination regimes at the end of secondary school vary greatly intra- and cross-nationally, and in recent years have undergone important reforms often geared towards increasing student achievement. This research presents a comparative analysis of the relationship between examination regimes and student achievement in the OECD. Using a micro…
School Size and Student Achievement
ERIC Educational Resources Information Center
Riggen, Vicki
2013-01-01
This study examined whether a relationship between high school size and student achievement exists in Illinois public high schools in reading and math, as measured by the Prairie State Achievement Exam (PSAE), which is administered to all Illinois 11th-grade students. This study also examined whether the factors of socioeconomic status, English…
Motivational Factors in School Achievement.
ERIC Educational Resources Information Center
Maehr, Martin L.
A summary is presented of the literature on motivation relating to achievement in the classroom. Special attention is given to how values, ideology, and various cultural patterns may serve to enhance motivation to achieve in the classroom. In considering what determines motivation and personal investment in educational pursuits, the following…
General Achievement Trends: New Jersey
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: North Carolina
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
Perils of Standardized Achievement Testing
ERIC Educational Resources Information Center
Haladyna, Thomas M.
2006-01-01
This article argues that the validity of standardized achievement test-score interpretation and use is problematic; consequently, confidence and trust in such test scores may often be unwarranted. The problem is particularly severe in high-stakes situations. This essay provides a context for understanding standardized achievement testing, then…
Raising Boys' Achievement in Schools.
ERIC Educational Resources Information Center
Bleach, Kevan, Ed.
This book offers insights into the range of strategies and good practice being used to raise the achievement of boys. Case studies by school-based practitioners suggest ideas and measures to address the issue of achievement by boys. The contributions are: (1) "Why the Likely Lads Lag Behind" (Kevan Bleach); (2) "Helping Boys Do Better in Their…
Stress Correlates and Academic Achievement.
ERIC Educational Resources Information Center
Bentley, Donna Anderson; And Others
An ongoing concern for educators is the identification of factors that contribute to or are associated with academic achievement; one group of variables that has received little attention is that involving stress. The relationship between perceived sources of stress and academic achievement was examined to determine if reactions to stress…
Achievement in Writing Geometry Proofs.
ERIC Educational Resources Information Center
Senk, Sharon L.
In 1981 a nationwide assessment of achievement in writing geometry proofs was conducted by the Cognitive Development and Achievement in Secondary School Geometry project. Over 1,500 students in 11 schools in 5 states participated. This paper describes the sample, instruments, grading procedures, and selected results. Results include: (1) at the…
Teaching the Low Level Achiever.
ERIC Educational Resources Information Center
Salomone, Ronald E., Ed.
1986-01-01
Intended for teachers of the English language arts, the articles in this issue offer suggestions and techniques for teaching the low level achiever. Titles and authors of the articles are as follows: (1) "A Point to Ponder" (Rachel Martin); (2) "Tracking: A Self-Fulfilling Prophecy of Failure for the Low Level Achiever" (James Christopher Davis);…
Predicting Achievement in Foreign Language.
ERIC Educational Resources Information Center
Hart, Mary Elizabeth
A review of research is inconclusive concerning the relationship between intelligence and language proficiency. A study of 10th grade students (n=35) examined scores on a high school entrance exam and achievement in foreign language after 1 year of study. Both math and reading showed a significant correlation with foreign language achievement; the…
Superintendent Tenure and Student Achievement
ERIC Educational Resources Information Center
Simpson, Jennifer
2013-01-01
A correlational research design was used to examine the influence of superintendent tenure on student achievement in rural Appalachian Kentucky school districts. Superintendent tenure was compared to aggregated student achievement scores for 2011 and to changes in students' learning outcomes over the course of the superintendents' tenure. The…
NASA Astrophysics Data System (ADS)
Baas, Nils A.
2016-08-01
In this paper, we discuss various philosophical aspects of the hyperstructure concept extending networks and higher categories. By this discussion, we hope to pave the way for applications and further developments of the mathematical theory of hyperstructures.
Forecasting Higher Education's Future.
ERIC Educational Resources Information Center
Boyken, Don; Buck, Tina S.; Kollie, Ellen; Przyborowski, Danielle; Rondinelli, Joseph A.; Hunter, Jeff; Hanna, Jeff
2003-01-01
Offers predictions on trends in higher education to accommodate changing needs, lower budgets, and increased enrollment. They involve campus construction, security, administration, technology, interior design, athletics, and transportation. (EV)
ERIC Educational Resources Information Center
O'Brian, Edward J.
1973-01-01
Describes the 4 basic areas in which institutional marketing can be put to use in higher educational institutions: educational services offered, pricing (tuition), promotion to prospective students, and distribution (extension courses and courses that go to the student). (PG)
Enhanced algorithms for stochastic programming
Krishna, A.S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of a piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved first, both to provide a starting point for the stochastic solution and to speed up the algorithm by making use of the information obtained from its solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
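One classical variance-reduction technique of the kind this abstract mentions is antithetic sampling. The sketch below estimates the mean of a toy piecewise-linear function (exact value 0.25); the surrogate function and all names are illustrative assumptions, not the dissertation's actual recourse approximation.

```python
import random

def recourse(x):
    # Illustrative piecewise-linear stand-in for the cheap surrogate of the
    # recourse function (not the dissertation's actual approximation).
    return max(0.0, 2.0 * x - 1.0)

def estimate_mean(n, antithetic=False, seed=1):
    """Monte-Carlo estimate of E[recourse(U)], U ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()
        if antithetic:
            # Pair each draw u with its reflection 1 - u; the negative
            # correlation between the pair reduces the estimator's variance.
            total += 0.5 * (recourse(u) + recourse(1.0 - u))
        else:
            total += recourse(u)
    return total / n
```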
Digital Shaping Algorithms for GODDESS
NASA Astrophysics Data System (ADS)
Lonsdale, Sarah-Jane; Cizewski, Jolie; Ratkiewicz, Andrew; Pain, Steven
2014-09-01
Gammasphere-ORRUBA: Dual Detectors for Experimental Structure Studies (GODDESS) combines the highly segmented position-sensitive silicon strip detectors of ORRUBA with up to 110 Compton-suppressed HPGe detectors from Gammasphere, for high-resolution particle-gamma coincidence measurements. The signals from the silicon strip detectors have position-dependent rise times, and require different forms of pulse shaping for optimal position and energy resolutions. Traditionally, a compromise was achieved with a single shaping of the signals performed by conventional analog electronics. However, there are benefits to using digital acquisition of the detector signals, including the ability to apply multiple custom shaping algorithms to the same signal, each optimized for position or energy, in addition to providing a flexible triggering system and a reduction in rate limitation due to pile-up. Recent developments toward creating digital signal processing algorithms for GODDESS will be discussed. This work is supported in part by the U.S. D.O.E. and N.S.F.
Hybrid protection algorithms based on game theory in multi-domain optical networks
NASA Astrophysics Data System (ADS)
Guo, Lei; Wu, Jingjing; Hou, Weigang; Liu, Yejun; Zhang, Lincong; Li, Hongming
2011-12-01
As network size increases, the optical backbone is divided into multiple domains, each with its own network operator and management policy. At the same time, failures in an optical network can lead to huge data loss, since each wavelength carries a large amount of traffic. Survivability in multi-domain optical networks is therefore very important. However, existing survivability algorithms achieve only a unilateral optimization of profit, for either users or network operators, and thus cannot find a double-win optimal solution that accounts for the economic interests of both. In this paper we develop a multi-domain network model involving multiple Quality of Service (QoS) parameters. After presenting a link evaluation approach based on fuzzy mathematics, we propose a game model to find the optimal solution that maximizes the user's utility, the network operator's utility, and the joint utility of user and network operator. Since the problem of finding the double-win optimal solution is NP-complete, we propose two new hybrid protection algorithms: the Intra-domain Sub-path Protection (ISP) algorithm and the Inter-domain End-to-end Protection (IEP) algorithm. In ISP and IEP, hybrid protection means that an intelligent algorithm based on Bacterial Colony Optimization (BCO) and a heuristic algorithm are used to solve survivability in intra-domain routing and inter-domain routing, respectively. Simulation results show that ISP and IEP have similar comprehensive utility. In addition, ISP has better resource utilization efficiency, lower blocking probability, and higher network operator's utility, while IEP has better user's utility.
Performance evaluation of imaging seeker tracking algorithm based on multi-features
NASA Astrophysics Data System (ADS)
Li, Yujue; Yan, Jinglong
2011-08-01
The paper presents a new, efficient method for evaluating the performance of imaging seeker tracking algorithms. The method uses multiple features associated with the tracking point of each video frame, obtains a local score (LS) for every feature, and derives a global score (GS) for a given tracking algorithm according to a combination strategy. The method can be divided into three steps. First, it extracts evaluation features from the neighborhood of each tracking point; features may include tracking error, target shape, target area, tracking path, and so on. Then, for each feature, a local score is obtained based on the number of targets tracked successfully, using a similarity measure and an empirical threshold between the neighborhood of the tracking point and the target template to decide whether tracking succeeded; for single-target tracking this number is simply 0 or 1. Finally, it assigns a weight to each feature according to its validity for performance assessment; the weighted local scores are summed and normalized between 0 and 1, giving the global score of the tracking algorithm. By comparing the global scores of different tracking algorithms on a given type of scene, tracking performance can be evaluated quantitatively. The proposed method covers nearly all tracking error factors that can be introduced into the process of target tracking, so the evaluation result has high reliability. Experimental results, obtained with flight video from an infrared imaging seeker and several target tracking algorithms, illustrate tracking performance and demonstrate the effectiveness and robustness of the proposed method.
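The final combination step described in this abstract amounts to a normalized weighted average of the per-feature local scores. A minimal sketch, with hypothetical feature names, scores, and weights (none taken from the paper):

```python
def global_score(local_scores, weights):
    """Weighted combination of per-feature local scores, normalized to [0, 1]
    (assuming each local score is itself already in [0, 1])."""
    num = sum(weights[f] * local_scores[f] for f in local_scores)
    den = sum(weights[f] for f in local_scores)
    return num / den

# Two illustrative features with hypothetical local scores and weights.
gs = global_score({"tracking_error": 0.9, "target_area": 0.6},
                  {"tracking_error": 2.0, "target_area": 1.0})
```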
Khan, Rao F. Villarreal-Barajas, Eduardo; Lau, Harold; Liu, Hong-Wei
2014-04-01
Stereotactic body radiotherapy (SBRT) is a curative regimen that uses hypofractionated radiation-absorbed dose to achieve a high degree of local control in early-stage non–small cell lung cancer (NSCLC). In the presence of heterogeneities, the dose calculation for the lungs becomes challenging. We have evaluated the dosimetric effect of the recently introduced advanced dose-calculation algorithm, Acuros XB (AXB), for SBRT of NSCLC. A total of 97 patients with early-stage lung cancer who underwent SBRT at our cancer center during the last 4 years were included. Initial clinical plans were created in Aria Eclipse version 8.9 or earlier, using 6 to 10 fields with 6-MV beams, and dose was calculated using the anisotropic analytic algorithm (AAA) as implemented in the Eclipse treatment planning system. The clinical plans were recalculated in Aria Eclipse 11.0.21 using both the AAA and AXB algorithms. Both sets of plans were normalized to the same prescription point at the center of mass of the target. A secondary monitor unit (MU) calculation was performed using the commercial program RadCalc for all of the fields. For planning target volumes ranging from 19 to 375 cm³, a comparison of MUs was performed for both sets of algorithms on a field and plan basis. In total, the variation of MUs for 677 treatment fields was investigated in terms of equivalent depth and equivalent square of the field. Overall, the MUs required by AXB to deliver the prescribed dose are on average 2% higher than those of AAA. Using a 2-tailed paired t-test, the MUs from the 2 algorithms were found to be significantly different (p < 0.001). The secondary independent MU calculator RadCalc underestimates the required MUs (on average by 4% to 5%) in the lung relative to either of the 2 dose algorithms.
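The paired t-test used in this abstract compares matched per-field values from the two algorithms. A minimal sketch of the test statistic itself, using only the standard library; the sample numbers are made up for illustration and do not come from the study:

```python
import math

def paired_t(a, b):
    """Paired t statistic for matched samples a and b (e.g. per-field MUs
    computed by two dose algorithms on the same treatment fields)."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of the differences
    return mean / math.sqrt(var / n)                 # t with n - 1 degrees of freedom
```

In practice one would obtain the p-value from the t distribution with n - 1 degrees of freedom (e.g. via `scipy.stats.ttest_rel`); the sketch stops at the statistic.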
EDITORIAL: Deeper, broader, higher, better?
NASA Astrophysics Data System (ADS)
Dobson, Ken
1998-07-01
Honorary Editor The standard of educational achievement in England and Wales is frequently criticized, and it seems to be an axiom of government that schools and teachers need to be shaken up, kept on a tight rein, copiously inspected, shamed and blamed as required: in general, subjected to the good old approach of: 'Find out what Johnny is doing and tell him to stop.' About the only exception to this somewhat severe attitude is at A-level, where the standard is simply golden. Often, comparisons are made between the performance of, say, English children and that of their coevals in other countries, with different customs, systems, aims and languages. But there has been a recent comparison of standards at A-level with a non-A-level system of pre-university education, in an English-speaking country that both sends students to English universities and accepts theirs into its own, and is, indeed, represented in the UK government at well above the level expected from its ethnic weighting in the population. This semi-foreign country is Scotland. The conclusions of the study are interesting. Scotland has had its own educational system, with `traditional breadth', and managed to escape much of the centralized authoritarianism that we have been through south of the border. It is interesting to note that, while for the past dozen years or so the trend in A-level Physics entries has been downwards, there has been an increase in the take-up of Scottish `Highers'. Highers is a one-year course. Is its popularity due to its being easier than A-level? Scottish students keen enough to do more can move on to the Certificate of Sixth Year Studies, and will shortly be able to upgrade a Higher Level into an Advanced Higher Level. A comparability study [Comparability Study of Scottish Qualifications and GCE Advanced Levels: Report on Physics January 1998 (free from SQA)] was carried out by the Scottish Qualifications Authority (SQA) with the aim (amongst others) of helping
Feature extraction and classification algorithms for high dimensional data
NASA Technical Reports Server (NTRS)
Lee, Chulhee; Landgrebe, David
1993-01-01
Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized
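The multistage truncation idea in this abstract, eliminating unlikely classes from further consideration at each stage, can be sketched as a threshold on per-class likelihood scores. The function, class names, and ratio are illustrative assumptions, not the paper's actual criteria:

```python
def prune_classes(scores, ratio=0.1):
    """Keep only classes whose likelihood score is within `ratio` of the best
    score; the rest are dropped before the more expensive later stages."""
    best = max(scores.values())
    return {c: s for c, s in scores.items() if s >= ratio * best}

# Hypothetical per-class scores after a cheap first stage.
kept = prune_classes({"water": 1.0, "urban": 0.05, "forest": 0.5}, ratio=0.1)
```

The paper's truncation criteria also bound the classification error introduced by the pruning; this sketch only shows the mechanics of eliminating classes.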
Development of the DPR algorithms for GPM science construction
NASA Astrophysics Data System (ADS)
Oki, R.; Shimizu, S.; Kubota, T.; Yoshida, N.; Kachi, M.; Iguchi, T.
2009-04-01
The Global Precipitation Measurement (GPM) mission is an international satellite mission for understanding the distribution of global precipitation. It started as a follow-on and expanded mission of the Tropical Rainfall Measuring Mission (TRMM) project. Three-dimensional measurement of precipitation will be achieved by the Dual-frequency Precipitation Radar (DPR) aboard the GPM core satellite. The DPR, which is being developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), consists of two radars: a Ku-band precipitation radar at 13.6 GHz (KuPR) and a Ka-band radar at 35.55 GHz (KaPR). The DPR is expected to advance precipitation science by expanding the coverage of observations to higher latitudes than those of the TRMM PR, measuring snow and light rain with the KaPR, and providing drop size distribution information based on the differential attenuation of echoes at the two frequencies. Because the GPM core satellite, like the TRMM, is in a non-sun-synchronous orbit, we can derive information on the diurnal cycle of precipitation over the mid-latitudes in addition to the Tropics. JAXA will promote and contribute to this advance of science through the development of the DPR algorithms. We are developing synthetic DPR Level 1 data from experimental data of the TRMM PR. Moreover, we are trying to validate the algorithms physically by using data sets synthesized from a cloud-resolving model of the Japan Meteorological Agency and the satellite radar simulation algorithm of the NICT.
Surface solar irradiance from SCIAMACHY measurements: algorithm and validation
NASA Astrophysics Data System (ADS)
Wang, P.; Stammes, P.; Mueller, R.
2011-02-01
Broadband surface solar irradiances (SSI) are, for the first time, derived from SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CartograpHY) satellite measurements. The retrieval algorithm, called FRESCO (Fast REtrieval Scheme for Clouds from Oxygen A band) SSI, is similar to the Heliosat method. In contrast to the standard Heliosat method, the cloud index is replaced by the effective cloud fraction derived from the FRESCO cloud algorithm. The MAGIC (Mesoscale Atmospheric Global Irradiance Code) algorithm is used to calculate clear-sky SSI. The SCIAMACHY SSI product is validated against the globally distributed BSRN (Baseline Surface Radiation Network) measurements and compared with the ISCCP-FD (International Satellite Cloud Climatology Project Flux Dataset) surface shortwave downwelling fluxes (SDF). For one year of data in 2008, the mean difference between the instantaneous SCIAMACHY SSI and the hourly mean BSRN global irradiances is -4 W m-2(-1%) with a standard deviation of 101 W m-2 (20%). The mean difference between the globally monthly mean SCIAMACHY SSI and ISCCP-FD SDF is less than -12 W m-2 (-2%) for every month in 2006 and the standard deviation is 62 W m-2 (12%). The correlation coefficient is 0.93 between SCIAMACHY SSI and BSRN global irradiances and is greater than 0.96 between SCIAMACHY SSI and ISCCP-FD SDF. The evaluation results suggest that the SCIAMACHY SSI product achieves similar mean bias error and root mean square error as the surface solar irradiances derived from polar orbiting satellites with higher spatial resolution.
Surface solar irradiance from SCIAMACHY measurements: algorithm and validation
NASA Astrophysics Data System (ADS)
Wang, P.; Stammes, P.; Mueller, R.
2011-05-01
Broadband surface solar irradiances (SSI) are, for the first time, derived from SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CartograpHY) satellite measurements. The retrieval algorithm, called FRESCO (Fast REtrieval Scheme for Clouds from the Oxygen A band) SSI, is similar to the Heliosat method. In contrast to the standard Heliosat method, the cloud index is replaced by the effective cloud fraction derived from the FRESCO cloud algorithm. The MAGIC (Mesoscale Atmospheric Global Irradiance Code) algorithm is used to calculate clear-sky SSI. The SCIAMACHY SSI product is validated against globally distributed BSRN (Baseline Surface Radiation Network) measurements and compared with ISCCP-FD (International Satellite Cloud Climatology Project Flux Dataset) surface shortwave downwelling fluxes (SDF). For one year of data in 2008, the mean difference between the instantaneous SCIAMACHY SSI and the hourly mean BSRN global irradiances is -4 W m-2 (-1 %) with a standard deviation of 101 W m-2 (20 %). The mean difference between the globally monthly mean SCIAMACHY SSI and ISCCP-FD SDF is less than -12 W m-2 (-2 %) for every month in 2006 and the standard deviation is 62 W m-2 (12 %). The correlation coefficient is 0.93 between SCIAMACHY SSI and BSRN global irradiances and is greater than 0.96 between SCIAMACHY SSI and ISCCP-FD SDF. The evaluation results suggest that the SCIAMACHY SSI product achieves similar mean bias error and root mean square error as the surface solar irradiances derived from polar orbiting satellites with higher spatial resolution.
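The validation statistics quoted in the two SCIAMACHY SSI abstracts above (mean difference and standard deviation against BSRN and ISCCP-FD) reduce to simple aggregate comparisons of matched series. A minimal sketch of the two most common such metrics, mean bias error and root mean square error, with made-up numbers for illustration:

```python
def validation_stats(retrieved, reference):
    """Mean bias error (MBE) and root mean square error (RMSE) of a retrieved
    series against a matched reference series (e.g. satellite SSI vs. ground
    irradiance measurements)."""
    n = len(retrieved)
    diffs = [r - b for r, b in zip(retrieved, reference)]
    mbe = sum(diffs) / n                          # signed average difference
    rmse = (sum(d * d for d in diffs) / n) ** 0.5  # magnitude of typical error
    return mbe, rmse

# Tiny illustrative example (values are not from the paper).
mbe, rmse = validation_stats([3.0, 5.0], [1.0, 1.0])
```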
Formally Verified Practical Algorithms for Recovery from Loss of Separation
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Munoz, Caesar A.
2009-01-01
In this paper, we develop and formally verify practical algorithms for recovery from loss of separation. The formal verification is performed in the context of a criteria-based framework. This framework provides rigorous definitions of horizontal and vertical maneuver correctness that guarantee divergence and achieve horizontal and vertical separation. The algorithms are shown to be independently correct, that is, separation is achieved when only one aircraft maneuvers, and implicitly coordinated, that is, separation is also achieved when both aircraft maneuver. In this paper we improve the horizontal criteria over our previous work. An important benefit of the criteria approach is that different aircraft can execute different algorithms and implicit coordination will still be achieved, as long as they all meet the explicit criteria of the framework. Towards this end we have sought to make the criteria as general as possible. The framework presented in this paper has been formalized and mechanically verified in the Prototype Verification System (PVS).
Higher-order time integration of Coulomb collisions in a plasma using Langevin equations
Dimits, A. M.; Cohen, B. I.; Caflisch, R. E.; Rosin, M. S.; Ricketson, L. F.
2013-02-08
The extension of Langevin-equation Monte-Carlo algorithms for Coulomb collisions from the conventional Euler-Maruyama time integration to the next higher order of accuracy, the Milstein scheme, has been developed, implemented, and tested. This extension proceeds via a formulation of the angular scattering directly as stochastic differential equations in the two fixed-frame spherical-coordinate velocity variables. Results from the numerical implementation show the expected improvement [O(Δt) vs. O(Δt^{1/2})] in the strong convergence rate both for the speed |v| and angular components of the scattering. An important result is that this improved convergence is achieved for the angular component of the scattering if and only if the “area-integral” terms in the Milstein scheme are included. The resulting Milstein scheme is of value as a step towards algorithms with both improved accuracy and efficiency. These include both algorithms with improved convergence in the averages (weak convergence) and multi-time-level schemes. The latter have been shown to give a greatly reduced cost for a given overall error level when compared with conventional Monte-Carlo schemes, and their performance is improved considerably when the Milstein algorithm is used for the underlying time advance versus the Euler-Maruyama algorithm. A new method for sampling the area integrals is given which is a simplification of an earlier direct method and which retains high accuracy. Lastly, this method, while being useful in its own right because of its relative simplicity, is also expected to considerably reduce the computational requirements for the direct conditional sampling of the area integrals that is needed for adaptive strong integration.
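The Euler-Maruyama vs. Milstein strong-convergence comparison in this abstract can be illustrated on a much simpler scalar SDE, geometric Brownian motion, where the exact solution is known and, in one dimension, no area-integral terms arise. This is a toy stand-in, not the paper's two-variable spherical-coordinate Coulomb scattering system:

```python
import math
import random

def strong_errors(n_paths=500, n_steps=32, mu=0.5, sigma=0.8, T=1.0, seed=7):
    """Mean terminal error |X_T - exact| for Euler-Maruyama vs. Milstein on
    geometric Brownian motion dX = mu*X dt + sigma*X dW, X_0 = 1."""
    rng = random.Random(seed)
    dt = T / n_steps
    err_em = err_mil = 0.0
    for _ in range(n_paths):
        x_em = x_mil = 1.0
        w = 0.0
        for _ in range(n_steps):
            dw = rng.gauss(0.0, math.sqrt(dt))
            w += dw
            x_em += mu * x_em * dt + sigma * x_em * dw
            # Milstein adds the 0.5*sigma^2*X*(dW^2 - dt) correction term,
            # raising the strong convergence order from 1/2 to 1.
            x_mil += (mu * x_mil * dt + sigma * x_mil * dw
                      + 0.5 * sigma * sigma * x_mil * (dw * dw - dt))
        # Exact solution driven by the same Brownian path.
        exact = math.exp((mu - 0.5 * sigma * sigma) * T + sigma * w)
        err_em += abs(x_em - exact)
        err_mil += abs(x_mil - exact)
    return err_em / n_paths, err_mil / n_paths
```

Running both schemes on the same Brownian paths isolates the discretization error; the Milstein error should come out markedly smaller at the same step size.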
Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen
2013-01-01
One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance, which includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years; such networks are known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data, with unique performance, complexity, and quality of service challenges. WVSNs consist of a large number of battery-powered and resource-constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance applications in WVSNs. How to meet the stringent delay QoS in resource-constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers both the “system perspective” and the “user perspective” is proposed to determine near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. This paper shows, for the first time, how to meet the delay QoS while simultaneously achieving higher system throughput in stringently resource-constrained WVSNs.
Time-step Considerations in Particle Simulation Algorithms for Coulomb Collisions in Plasmas
Cohen, B I; Dimits, A; Friedman, A; Caflisch, R
2009-10-29
The accuracy of first-order Euler and higher-order time-integration algorithms for grid-based Langevin-equation collision models is assessed in a specific relaxation test problem. We show that statistical noise errors can overshadow time-step errors and argue that statistical noise errors can be conflated with time-step effects. Using a higher-order integration scheme may not achieve any benefit in accuracy for examples of practical interest. We also investigate the collisional relaxation of an initial electron-ion relative drift and the collisional relaxation to a resistive steady state in which a quasi-steady current is driven by a constant applied electric field, as functions of the time step used to resolve the collision processes, using binary and grid-based test-particle Langevin-equation models. We compare results from two grid-based Langevin-equation collision algorithms to results from a binary collision algorithm for modeling electron-ion collisions. Some guidance is provided regarding how large a time step can be used, compared to the inverse of the characteristic collision frequency, for specific relaxation processes.
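The point that statistical noise can overshadow time-step error is easy to see on a toy relaxation problem. The sketch below is an Ornstein-Uhlenbeck drift relaxation, not the paper's Coulomb collision operator: the ensemble-mean drift decays exactly as exp(-nu*t), the Euler time-step bias is O(dt), and the sampling noise of the ensemble mean is O(1/sqrt(n_particles)), so with modest particle counts the noise dominates.

```python
import math
import random

def relax_drift(n_particles, dt, n_steps, nu=1.0, seed=3):
    """Euler-Maruyama relaxation of an ensemble drift for the Langevin
    equation dv = -nu*v dt + sqrt(2*nu) dW, starting from v = 1 for all
    particles; the exact ensemble-mean drift decays as exp(-nu*t)."""
    rng = random.Random(seed)
    vs = [1.0] * n_particles
    for _ in range(n_steps):
        vs = [v - nu * v * dt + math.sqrt(2.0 * nu * dt) * rng.gauss(0.0, 1.0)
              for v in vs]
    # Statistical noise of this mean scales as 1/sqrt(n_particles), while the
    # Euler time-step bias scales as dt; here the noise term dominates.
    return sum(vs) / n_particles
```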