Links on the topic "Criteria of effectiveness and quality..." (other than citation index and impact factor)

Author: Сергей Шишкин, 22.06.2006 13:05



Сергей Шишкин

22.06.2006 13:05 Last edited: 18.04.2007 02:22 by Сергей Шишкин
http://www.the-scientist.com/news/display/23683/
Stephen Pincock
UK plans research funding overhaul
Government to debate proposals to rethink university research funding over the next four months
[Published 20th June 2006 04:24 PM GMT]


"British government proposals to overhaul the way academic research is funded could result in a redistribution of money among universities, with top centers such as Cambridge and Durham losing funds, while some newer institutions gain funds, it emerged last week.

Last Tuesday (June 13), Higher Education Minister Bill Rammell said the existing Research Assessment Exercise (RAE), which uses peer review to determine how more than £1bn in funding is divided among universities each year, would be held for the last time in 2008. In its place, the government wants a more straightforward system that focuses on "metrics," or statistical analyses of outcomes. For subjects like science, engineering, and medicine, funders would use levels of external research income to calculate government funding, Rammell said. "


"Cotgreave /director of the Campaign for Science and Engineering/, who describes the RAE as an albatross around the neck of university science, welcomed the system's death-knell. For science and engineering disciplines, a metric-based system would cut swathes of bureaucracy, he said.

In Cotgreave's view, however, the system could do with an even bigger shake-up. "For science and engineering, if what you want is roughly the same distribution as you get with the RAE then you may as well use a metric-based system," he said. "But if what you want is a system that supports the kind of risky research that the funding council grant is supposed to be for then you need something different."

He added that the current system has caused scientists to "do safe research that will do well in the RAE," noting that CaSE plans to argue for systems that provide more support for risky research during the consultation period."



Links for the article:


DFES: Reform of higher education research assessment and funding
http://www.dfes.gov.uk/consultations/

H. Gavaghan, "Mixed reaction to RAE proposals," The Scientist, June 6, 2003.
http://www.the-scientist.com/article/display/21370/

HEFCE: Reform of higher education research assessment and funding
http://www.hefce.ac.uk/research/assessment/reform/

Campaign for Science and Engineering
http://www.savebritishscience.org.uk/about/who/index.htm

Universities UK
http://www.universitiesuk.ac.uk/



From a comment on the article (Stevan Harnad, American Scientist Open Access Forum):

"Mechanically basing the future RAE rankings on prior funding would just generate a Matthew Effect (making the rich richer and the poor poorer), a self-fulfilling prophecy that is simply equivalent to increasing the amount given to those who were previously funded (and scrapping the RAE altogether, as a further, semi-independent performance evaluator and funding source). What the RAE *should* be planning to do is to look at weighted combinations of all available research performance metrics -- including the many that are correlated, but not so tightly correlated, with prior RAE rankings, such as author/article/book citation counts, article download counts, co-citations (co-cited with and co-cited by, weighted with the citation weight of the co-citer/co-citee), endogamy/exogamy metrics (citations by self or collaborators versus others, within and across disciplines), hub/authority counts (in-cites and out-cites, weighted recursively by the citation's own in-cite and out-cite counts), download and citation growth rates, semantic-web correlates, etc. It would be both arbitrary and absurd to blunt the sensitivity, predictivity and validity of metrics a priori by biassing them toward prior funding alone, which should just be one of a full battery of weighted metrics, adjusted to each discipline and validated against one another (and against human judgment too).

Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable, in Jacobs, N., Eds. Open Access: Key Strategic, Technical and Economic Aspects, chapter 21. Chandos.
http://eprints.ecs.soton.ac.uk/12369/"
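
A rough Python sketch of the kind of weighted multi-metric ranking Harnad describes (the metric names, weights and z-score normalization below are my own illustrative assumptions, not anything prescribed in the comment or in the cited chapter):

from statistics import mean, stdev

def zscore(values):
    # Standardize raw metric values within one discipline so that
    # differently scaled metrics can be combined.
    m = mean(values)
    s = stdev(values) if len(values) > 1 else 0.0
    return [(v - m) / s if s else 0.0 for v in values]

def combined_score(metrics, weights):
    # Weighted sum of per-discipline-normalized metrics.
    # metrics: metric name -> list of raw values (one per department)
    # weights: metric name -> weight chosen for this discipline
    normalized = {name: zscore(vals) for name, vals in metrics.items()}
    n = len(next(iter(metrics.values())))
    return [sum(weights[name] * normalized[name][i] for name in metrics)
            for i in range(n)]

# Made-up numbers for three departments in one discipline:
metrics = {
    "citations_per_paper":   [12.0, 8.5, 20.1],
    "downloads_per_paper":   [340.0, 210.0, 505.0],
    "external_income_gbp_m": [4.2, 1.1, 9.8],
}
weights = {"citations_per_paper": 0.5,
           "downloads_per_paper": 0.2,
           "external_income_gbp_m": 0.3}
print(combined_score(metrics, weights))

In such a scheme prior funding ("external research income") enters only as one weighted term among many, which is Harnad's point, rather than as the sole determinant.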



new:

http://www.dfes.gov.uk/consultations/downloadableDocs/RAE%20response%20summary%20250107.doc
DFES CONSULTATION ON THE REFORM OF HIGHER EDUCATION RESEARCH ASSESSMENT AND FUNDING: SUMMARY OF RESPONSES
/An example of a well-structured document summarizing the proposals for changing the current situation, and the opinions on the planned reforms, received from organizations and individuals./

Сергей Шишкин

http://iefimov.livejournal.com/30847.html
Игорь Ефимов. A ranking of the research productivity of US universities. - LiveJournal (iefimov), 06:38 pm January 11th, 2007.

Quote:

An interesting article in The Chronicle of Higher Education, A New Standard for Measuring Doctoral Programs.

The article describes a new evaluation method and compares it with the older ones: the US News & World Report ranking on one side and the National Research Council ranking on the other. The first ranking is published regularly by the magazine of the same name and is the bible of American school leavers deciding where to study. It has no direct relation to research; it reflects, rather, reputation, history, and the market value of a degree. An important ranking for students, but not for scientists. The second ranking is purely scientific and has a certain reputation, but it has not been updated since 1995. Now a new one has been devised, called the Faculty Productivity Index. This index counts articles, books, Nobel Prizes, grants, and so on, with certain weights, and divides the total by the number of professors in a given university or department. Simple and objective. The result is amusing to some and a case of "the emperor has no clothes" to others.  ................................
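
If I read this correctly, the Faculty Productivity Index boils down to a weighted per-capita sum of outputs; a minimal Python sketch with made-up weights and counts (the actual weights used by the index are not given in the post):

def faculty_productivity_index(dept, weights):
    # Weighted sum of output counts divided by the number of faculty.
    total = sum(weights[kind] * dept.get(kind, 0) for kind in weights)
    return total / dept["faculty"]

weights = {"articles": 1.0, "books": 5.0, "grants": 2.0, "awards": 10.0}  # assumed weights
dept = {"articles": 120, "books": 4, "grants": 30, "awards": 1, "faculty": 25}
print(faculty_productivity_index(dept, weights))  # (120 + 20 + 60 + 10) / 25 = 8.4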


ххх (Алексей Колесниченко)

Personally, I like http://www.phds.org/rankings better: it offers a much finer breakdown, by field of science and by other detailed criteria (for instance, for me /a biologist/ the ranking comes out quite different: 1. University of California - Berkeley)

Сергей Шишкин

http://arxiv.org/abs/0805.4650v1
The w-index: A significant improvement of the h-index
Authors: Qiang Wu
(Submitted on 30 May 2008 (this version), latest version 7 Jun 2008 (v2))
Comments: 7 pages, 3 tables
Subjects: Physics and Society (physics.soc-ph); Data Analysis, Statistics and Probability (physics.data-an)
Cite as: arXiv:0805.4650v1 [physics.soc-ph]

"I propose a new measure, the w-index, as a particularly simple and useful way to assess the integrated impact of a researcher's work, especially his or her excellent papers. The w-index can be defined as follow: A researcher has index w if w of his/her papers have at least 10w citations each, and the other papers have fewer than 10(w+1) citations."

Download:
* PDF only
http://arxiv.org/pdf/0805.4650v1
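
For comparison with the h-index it is meant to improve on, a minimal Python sketch of the w-index exactly as defined in the quoted abstract (largest w such that w papers have at least 10w citations each); the citation counts in the example are invented:

def h_index(citations):
    # Largest h such that h papers have at least h citations each (Hirsch, 2005).
    cites = sorted(citations, reverse=True)
    h = 0
    for k, c in enumerate(cites, start=1):
        if c >= k:
            h = k
        else:
            break
    return h

def w_index(citations):
    # Largest w such that w papers have at least 10*w citations each
    # (Wu, arXiv:0805.4650v1).
    cites = sorted(citations, reverse=True)
    w = 0
    for k, c in enumerate(cites, start=1):
        if c >= 10 * k:
            w = k
        else:
            break
    return w

papers = [120, 85, 40, 33, 12, 8, 2]  # invented citation record
print(h_index(papers))  # 6
print(w_index(papers))  # 3: three papers have at least 30 citations each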

Сергей Шишкин


http://www.polit.ru/science/2008/07/23/rubakov.html

Quote:

I have myself taken part, and still take part, in various expert committees of research centres around the world. For example, I went to the CERN theory division as an expert. Several people are invited there and asked for their opinion on the various lines of research: how do you rate this and that. By the time the experts arrive, all the materials have been prepared: what has been published, what results have been obtained. You look at it, you evaluate it, and you say: guys, in this area you have a gap, and what you are doing here is hopelessly out of date; whereas this line of research is developing splendidly, well done, keep supporting it. After that they draw their own conclusions.


Сергей Шишкин

http://www.nature.com/nature/journal/v457/n7225/full/457007b.html

Experts still needed
There are good reasons to be suspicious of metric-based research assessment.

Nature 457, 7-8 (1 January 2009) | doi:10.1038/457007b

Published online 31 December 2008

http://www.scientific.ru/dforum/scilife/1231249918 - full text and discussion