Can we count on journal metrics?

How do you rank science, how do you rate scientists, what kudos do you give their papers and what metrics do you attach to the impact of a paper? They’re questions as old as the scientific literature itself, but no one has resolved them. Independent organisations and publishers have attempted to answer them with the likes of the ISI Impact Factor. Academics weary of the prominent journals and the prominent researchers getting all the “gold stars” have attempted to overturn such metrics and devise their own, in the form of the H-index. But, getting the measure of metrics is difficult, especially in today’s climate.

In the current journal market, and particularly given the economic climate, institutional purchasing is severely constrained. For publishers outside the coterie of the three or four best-known houses, establishing prestige and validating the research articles within their pages is critical but difficult. In terms of survival of the fittest, this applies equally to journals published and paid for under traditional, open access and other models.

From the researchers’ perspective, they want to publish in journals that will give their science and their team the most prominence, and so more pulling power when it comes to filling in that next grant application or research assessment. Librarians and researchers, including those in specialist niches, have attempted to apply pressure on the way funding bodies, governments and companies rely on the standard metric.

Impact Factors are a double-edged sword, of course. If yours is high, you will be happy, whether author or publisher. But if it’s low, that situation is difficult to remedy and, without gaming the system, there are few ways for important work from researchers who lack prominence or who work in niche areas to have a big impact.

Institutions have recognised the problems and the biases to some extent and have begun to evaluate a title’s significance beyond the conventional approach. Perhaps a system like PeerIndex might be extended to researchers, their papers and journals in some way. Indeed, some publishers have devised their own systems, e.g. Elsevier with Scopus, and online scientific communities are beginning to find ways to rank research papers in a similar way to social bookmarking sites.

For some institutions and countries, following the Impact Factor is nevertheless obligatory. A recent paper by Larsen and von Ins investigates the Impact Factor, and others have looked at how publishing the right thing in the right place can affect careers (Segalla). There is a whole growth area in the research literature on assessing the Impact Factor and other metrics. Instances where the Impact Factor seems not to work particularly well have been reported recently: The Scientist explains how a single paper in a relatively small journal boosted that journal’s position in the league tables so that it overtook one of the most prominent and well-known journals, but only for the short period while that paper was topical and being widely cited.
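The fragility described above follows directly from the arithmetic of the standard two-year Impact Factor. A rough sketch, with entirely invented numbers, shows how one heavily cited paper can swing a small journal’s score:

```python
def impact_factor(citations, citable_items):
    """Two-year Journal Impact Factor for year Y: citations received
    in Y to items published in Y-1 and Y-2, divided by the number of
    citable items published in those two years."""
    return citations / citable_items

# Hypothetical small journal: 100 citable items over two years,
# 250 citations to them in the census year.
baseline = impact_factor(250, 100)                # 2.5

# One topical paper attracting 900 extra citations while the
# denominator stays the same more than quadruples the score.
with_hot_paper = impact_factor(250 + 900, 100)    # 11.5
```

The small denominator is the whole story: the same 900 extra citations would barely move a journal publishing thousands of items.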

There are many academics arguing for a change in the way papers and journals are assessed, among them Cameron Neylon who is hoping that the scientific community can build an alternative for the diverse measurement of research.

Unfortunately, there seems to be no simple answer to the problem of assessing research impact. Indeed, what is needed is some kind of ranking algorithm that can determine which of the various alternative impact factors systems would have the greatest…well…impact…

Larsen, P., & von Ins, M. (2010). The rate of growth in scientific publication and the decline in coverage provided by Science Citation Index. Scientometrics, 84(3), 575-603. DOI: 10.1007/s11192-010-0202-z


Segalla, M. (2008). Publishing in the right place or publishing the right thing: journal targeting and citations’ strategies for promotion and tenure committees. European J. of International Management, 2(2). DOI: 10.1504/EJIM.2008.017765

The perils of ranking

Today, I discovered I was listed on a Top 100 of UK Twitter users by The Independent newspaper based on the algorithm from PeerIndex. I was #47, since you ask, same as Armando Iannucci. It’s not that long since I made it on to a similar PeerIndex list published elsewhere. It’s all very flattering…

But, how can anyone boil down the worth of individuals, organisations, or other entities using half a dozen (almost random) measures? Some secret algorithm is then used to put those entities, whether Twitter users, websites in search engine results, or schools and colleges, into a ranked order by which too many people set incredible store thereafter.

It’s quite timely that one of my intellectual heroes, Malcolm Gladwell, writing in the most recent issue of the New Yorker, takes on this ranking issue and challenges the U.S. News & World Report’s college rankings, an ordered listing of universities. The list has apparently become the cornerstone of a rankings business that has outlasted its initial host. Gladwell suggests that such ranking is redundant in the modern world (and maybe always has been): there is no way a complex, multivariate phenomenon can be distilled down to a position in a single linear list. Even the presenters of the BBC’s Top Gear car show recognise that there are variables, such as body weight and wet roads, when ranking their “star in a reasonably priced car”.
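Gladwell’s objection can be demonstrated with a toy calculation (all the entities, measures and scores below are invented): collapsing several normalised measures into one league table depends entirely on the weights chosen, so different “secret algorithms” produce different orderings from identical data.

```python
# Three made-up entities scored on three made-up measures, each 0-1.
entities = {
    "A": {"reach": 0.9, "engagement": 0.2, "authority": 0.5},
    "B": {"reach": 0.4, "engagement": 0.9, "authority": 0.6},
    "C": {"reach": 0.6, "engagement": 0.6, "authority": 0.9},
}

def rank(weights):
    """Order entities by a weighted sum of their measures."""
    score = lambda m: sum(weights[k] * m[k] for k in weights)
    return sorted(entities, key=lambda e: score(entities[e]), reverse=True)

print(rank({"reach": 0.8, "engagement": 0.1, "authority": 0.1}))  # A tops this list
print(rank({"reach": 0.1, "engagement": 0.8, "authority": 0.1}))  # B tops this list
```

Same data, two “definitive” Top 3 lists; neither is more correct than the other, which is rather the point.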

I think this ranking failure applies as much to college honour rolls as it does to Twitter users. They’re all complex entities, after all, and recognition of that is perhaps more flattering than being number 47 on a Top 100.

Related articles

  • How PeerIndex calculated the Twitter 100
  • Can you rank journalists by authority on Twitter? PeerIndex thinks so
  • Top 50 fashion insiders on Twitter list topped by Daily Telegraph’s Hilary Alexander
  • Politicians ‘have less authority’ than comedians on Twitter
  • How to: Integrate PeerIndex into Twitter Profile Pages

Heptastic science news

  • The full list: The Twitter 100 – Its 200 million users share 110 million messages a day – and if you don't know who rules the twittersphere, you don't understand the 21st-century world. This guide is a definitive who's who of the UK's tweet elite. Although for some reason they included me on the list (at #47, same as Armando Iannucci).
  • Why haven’t we cured cancer yet? – How many times have you been asked this question, how many times have you asked this question yourself? The answer boils down to the fact that cancer is not a single disease, it's hundreds of different diseases. Asking that question is like asking, "why haven't we cured viral infection?" or "why haven't we cured car accidents?". Even if we can cure one type effectively, there are hundreds of other types to deal with. Even the umbrella term "breast cancer" belies the fact that there are many different types of disease that lead to malignancy in breast tissue.
  • Recycling carbon – Technologies that can use carbon dioxide as a chemical feedstock are high on the agenda in the face of rising atmospheric levels of the greenhouse gas. A novel iron-based catalytic process studied using inductively coupled plasma (ICP) atomic emission spectrometry shows how carbon dioxide can be converted into the industrially useful formic acid at an 80% yield. Formic acid might also be used as a fuel for fuel cells. The metal oxide by-product is readily reduced using glycerin derived from renewable sources, releasing lactic acid, which could be used for biopolymer production.
  • Feverish research – There is neither vaccine nor cure for the Ebola and Marburg viruses, which cause fatal haemorrhagic fever in humans. However, a new NMR spectroscopic study by US researchers has led to the discovery of a family of small molecules that apparently bind to the outer protein coat of the virus and halt its entry into human cells, so offering the possibility of an antiviral medication against the disease.
  • Structural biologists catch the pulse – Researchers have discovered that ultra-short X-ray pulses can produce exquisite measurements at the molecular level of biological objects by grabbing a "snapshot" just before the sample succumbs to radiation damage.
  • Enzymes against cocaine – The interaction of novel substrates with the enzyme butyrylcholinesterase (BChE) and its mutants has been investigated using computational and correlation studies. The insights revealed could improve our understanding of how this enzyme, which metabolizes cocaine, might be modulated in drug therapy and the development of anti-addiction drugs.
  • Spectroscopy & Separation Science – We need the page to get 25 members so that we can switch to a nice short URL…please "like" the page.

From Sciencebase Science Writer – seven science selections

Count envy, satisfaction and achievement

A friend of mine, András Paszternák, runs a thriving online nanotech community called Nanopaprika (he’s in Hungary, hence the name). He started the community (for which I am a Scientific & Advisory Board member) while he was doing his PhD and it has grown rapidly into one of the most targeted niches on the Web, with lots of partnerships across the nano community in academia, industry and publishing. Nevertheless, András is impatient and dissatisfied with the pace of growth. He wants to reach out to the whole community, a mere fraction of which (at 4000+ members from 70 countries) he has so far engaged.

“Nanopaprika is really not about numbers,” András told me. “I was also happy with first 10, 100, 1000 members. The story is about active members, about grad students who find a job, about industry people who share information about new products, about teachers who develop educational activities.”

Of course, each milestone is an achievement, but each milestone also means that the next milestone is further to go and there is always a sense of creeping dissatisfaction when one looks at the analytics. After all, there are other specialist online communities that are much bigger. Mendeley, ResearchGate, etc all have at least an order of magnitude greater membership, although it would be interesting to know how many are active too. Then there are the learned societies that have double that again or four, five, six times as many members. And if you’re talking true count envy, Facebook had 500 million members last year and has added another 100 million since.

I remember when I reached my first 100 Twitter followers, I was so pleased, but immediately I was thinking about the next big round number. 200, 300, 400 didn’t seem such a big deal. 500 was nice but reaching 1000 was better. I didn’t even notice when I got to 1100, then 1200, then 1300. 1500 wasn’t too bad, but I really wanted to get to 2000. Once I’d passed that point, 3000 wasn’t even in my sights, I wanted to get to 5000 as quickly as possible.

Growth seems to be linear, with the occasional spike when someone big retweets one of your links. The same happens with all communities, growth is rarely exponential unless your early adopters are incredibly well connected and successfully spread the word to their vast networks, at which point you can pass a tipping point and see runaway growth.

Smaller spikes do occur. They have happened several times since I joined Twitter in June 2007, but after the spikes the growth rate seems to revert to the same angle of upward incline on the chart. 6000 came and went, as did 7000, then 8000. I currently have 9272 followers on Twitter and each day acquire maybe a dozen or so new followers. One follower-tracking tool tells me that tomorrow I will have reached 9285. It also predicts that within three weeks I will have 9500. But, in six months’ time, or thereabouts, I should have 10,000 followers.
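The short-term predictions above are just linear extrapolation. A minimal sketch, using only the figures quoted in the text (9272 followers, roughly a dozen new per day); real prediction tools fit spikes and slowdowns too, which is presumably why their longer-range estimates stretch out:

```python
# Steady linear growth: followers(t) = current + rate * days.
current = 9272   # followers today (figure from the text)
per_day = 12.0   # roughly "a dozen or so" new followers per day

def days_until(target, now=current, rate=per_day):
    """Days needed to reach a target follower count at a steady rate."""
    return (target - now) / rate

print(round(days_until(9500)))    # 19 days, i.e. about three weeks
print(round(days_until(10000)))   # 61 days at this unchanging rate
```

At a constant dozen a day the 10,000 mark would arrive in about two months; the six-month estimate only makes sense if the tool assumes the rate tails off.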

So, what’s the next milestone after that? 20,000? 30,000? 100,000? At this rate that will take me 20 years. Is anyone still going to be tweeting in five years, let alone 20? Surely, I’ll have found something better to do by then…

Each milestone is an achievement, but each milestone also means that the next milestone is further to go.

By the way, I do know that Twitter is not simply about accumulating numbers. I do know that a lot of the “people” that are following me are not actively using Twitter. Lots of them are automated “bot” and spammer accounts and I really ought to remove those and block them.

I also know that at the core there are a few dozen friends and contacts with whom I regularly trade tweets and links, with whom I direct message (DM) and with whom I even have conversations via email, messaging, SMS, the phone and even in the real world.

It’s entertaining, it’s a chance to share what you know and learn about what you don’t. It’s a chance, vaguely, to earn a crust. It’s not really about the numbers. While every follower matters, there are only a limited number that count.

An octet of science news

  • Perfect Perfume – a video for Valentine’s Day – A bit of fun for Valentine's Day as the team combines to make our very own "perfect perfume"!
  • The lingering risk of thirdhand smoke – As Dubowski suggests, the notion of thirdhand smoke putatively being hazardous to health is controversial. Research in the late 2000s alluded to the potential problem of this form of pollution but ongoing public and academic scrutiny has not yet resolved the issue. Dubowski's work does provide a chemical basis for a possible risk but does not prove that the risk is substantial or otherwise. However, what is certain is that firsthand smoke is directly hazardous to the health of the smoker and recent evidence suggests that it could cause genetic damage almost the instant tobacco smoke is inhaled.
  • How marijuana works – Marijuana is the buds and leaves of the Cannabis sativa plant. This plant contains more than 400 chemicals, including delta-9-tetrahydrocannabinol (THC), the plant's main psychoactive chemical. THC is known to affect our brain's short-term memory. Additionally, marijuana affects motor coordination, increases your heart rate and raises levels of anxiety. Studies also show that marijuana contains cancer-causing chemicals typically associated with cigarettes. In this article, you will learn about marijuana, why this drug is so popular and what effects it has on your mind and body.
  • Tweeting the lab – The question of how to build an efficient and useable laboratory recording system is fundamentally one of how much information is necessary to record and how much of that can be recorded while bothering the researcher themselves as little as possible.
  • Kepler discovers a mini solar system – Using NASA’s orbiting Kepler observatory, astronomers have found a complete solar system of six planets orbiting a sun-like star… and it’s really weird: five of the six planets huddle closer to their star than Mercury does to the Sun!
  • Triclosan – should be used medically and banned in personal products – Triclosan is a really useful material with antibacterial and anti-inflammatory properties, but it should be banned from use in personal care products.
  • Sciencebase Presents… – Nerdy, geeky, dorky science videos. Classic stuff, will be loved by nerds, geeks and dorks everywhere!
  • JournalTOCs – 15,204 journals (including 1,676 Open Access journals) collected from 706 publishers. Very easy to browse and to create custom feeds for specific subject areas, e.g. just swap out "chemistry" for your chosen subject.

From Sciencebase Science Writer – Eight science picks

Breaking down technology transfer barriers

Breaking down technical and legal barriers is essential if technology transfer from academia to industry is to be done efficiently and effectively, according to researchers in Spain. Antonio Hidalgo, Professor of Technology Strategy at the Technical University of Madrid, and José Albors, Professor of Business Administration at the Technical University of Valencia, explain that publicity and prestige are not enough to allow the smooth transfer of knowledge from universities to the commercial sector.

There have, of course, been many public and prestigious success stories, many of which have emerged from the development of so-called science parks that facilitate collaboration between university campuses and tenant companies. Science ‘parks’ in Silicon Valley (California), Research Triangle Park (North Carolina) and Silicon Fen (Cambridge) have been incredibly successful, although there have probably been as many failed start-ups as successful ones, if not more. Nevertheless, those science parks have served as a useful model for similar sites across the globe, where they raise the technological sophistication of local companies, promote industrial R&D, boost foreign investment and generally create a knowledge-intensive economy in the region, rather than the labour-intensive economy of more conventional industrial sites.

The success of science parks can be attributed to various competitive advantages in which knowledge transfer is facilitated by the large reserves of technical knowledge and skills, the appropriate infrastructure, the attractiveness to venture capitalists, the open communications channels between academics and the business people (who are often one and the same) and not least the excellence of the local educational facilities.

Many researchers have previously tried to quantify the successes and failures of technology transfer, but Hidalgo and Albors are developing a new model that reveals how issues of intellectual property rights and technical constraints are often to blame for failure, and how, when those barriers are overcome, success often flows without resistance. Their model is built on a survey of those involved in more than 2000 projects, with 880 led by universities and more than 1300 led by business. What emerged is perhaps obvious at one level: universities and firms have different objectives, and an endpoint for technology transfer is only defined clearly when those objectives are themselves defined.

They make various suggestions based on their studies. On the university side of the equation, they recommend improving access, reducing bureaucracy, and providing professional management and organisational support. On the industrial side, companies need to be able to identify the value added by the university in a partnership and, at the same time, spread the commercial risk.

The researchers point out that despite public and prestigious efforts at the national and international level, public policies have not promoted the necessary cooperative aspects of research projects or the completion of technology transfer, and this, according to many of their survey respondents, is the main cause of the failure to surmount the technical and legal barriers. But, there are many long-term benefits to success in technology transfer: profitable IPRs for both universities and companies, increased prestige for the university, company growth, and various positive impacts on customers, employment and the financial bottom line on both sides.

Hidalgo, A., & Albors, J. (2011). University-industry technology transfer models: an empirical analysis. Int. J. Innovation and Learning, 9(2), 204-223.

The trouble with encryption

Lots of us encrypt files using the likes of AxCrypt and TrueCrypt. If there’s a risk of losing a device carrying sensitive information, such as contacts, email, bank statements or invoices, then it is worth using such a tool. The ease with which a file, folder, or even a complete hard drive or USB device can be encrypted always raises the question of why more people, and particularly government and other official departments, do not use it as a matter of course.

Encryption is the only sure way to keep prying eyes from reading your private data. Unfortunately, it comes at a price: an encrypted file is so obviously hiding something that law enforcement, thieves, or anyone else for that matter can deduce almost immediately that it must be important in some way, and so might focus their cracking efforts on that file or drive because extracting the raw data could be so profitable.

Now, many experts will tell you that “security through obscurity” is not a sensible way to protect one’s data. However, the honey pot of encrypted files is so sweet to those with malicious intent that obscurity might be the only way to approach securing your data. Such an option might be the best choice in a totalitarian regime, where the authorities might use torture or other brutal means to extract the password for an obviously encrypted file from an activist citizen during an uprising, for instance.

George Weir and Michael Morran of the Department of Computer and Information Sciences at the University of Strathclyde, UK, explain that textual steganography is a means of concealing an encoded message within text. “The appeal in such a system is its potential for hiding the fact that encoding is taking place,” they explain. “The failure to hide the presence of encoding is a form of information leakage and an inherent risk since it suggests that there is important or valuable information in transit.” After all, if an uprising in a non-democratic regime fails and the secret police come knocking on your door because you were so vocal on Twitter, Facebook and out on the streets during the revolution, then a stack of encrypted files on your computer is obvious bait for their efforts.

“To an adversary, or even the authorities, the presence of encrypted traffic between two individuals may be enough to arouse suspicion. Although the information may be securely encrypted, a third party may be motivated to intercept the information and either tamper with the message or prevent it ever reaching its destination. This risk will remain as long as encryption is adopted as the sole solution to information security needs,” the researchers say.

There has been much work on steganography in the digital age, in which a multimedia file, a photo or a music track, for instance, is used as a vault into which hidden information can be embedded. However, an activist sharing photos and music files is just as likely to arouse suspicion as one sharing obviously encrypted documents. The Strathclyde team is now developing and testing an approach to steganography that uses natural language to embed information in a text document that, to all intents and purposes, appears entirely innocuous.

Their approach is to utilize synonyms and deliberate errors to “encrypt” data within a block of text. By ensuring the changes are sparse enough within a given block, the encoding becomes essentially invisible and arouses no suspicion. It’s almost akin to the kind of mock spy talk we hear in popular culture – “Roberto will bring the blue flamingo to Moscow once the spare wheel is in the nest” – that kind of thing, but operating on a much more sophisticated and automated level.
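The synonym half of the idea is easy to sketch. This is not the Weir and Morran system itself, just a minimal illustration of the principle: each synonym pair that occurs in the cover text can carry one bit, signalled by which member of the pair appears.

```python
# Toy synonym-choice steganography: one bit per synonym pair found
# in the cover text (pairs and cover sentence are invented).
PAIRS = [("big", "large"), ("quick", "fast"), ("begin", "start")]

def embed(bits, cover_words):
    """Rewrite cover_words, choosing synonyms to encode the bit string."""
    out, i = [], 0
    for w in cover_words:
        for b0, b1 in PAIRS:
            if w in (b0, b1) and i < len(bits):
                w = b1 if bits[i] == "1" else b0  # pick synonym for this bit
                i += 1
                break
        out.append(w)
    return out

def extract(words):
    """Recover the bit string by noting which synonym was used."""
    bits = ""
    for w in words:
        for b0, b1 in PAIRS:
            if w == b0: bits += "0"
            elif w == b1: bits += "1"
    return bits

cover = "a big dog made a quick dash to begin the race".split()
stego = embed("101", cover)
print(" ".join(stego))  # a large dog made a quick dash to start the race
print(extract(stego))   # 101
```

The rewritten sentence reads as ordinary English, which is the whole point; capacity is tiny (one bit per pair), so a real system needs a much larger synonym dictionary plus the deliberate-error channel the researchers describe.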

“The main goals of the final system lay in the ability to hide information in text in such a way that hidden content is undetectable,” the team says. Their tests demonstrate that it is possible to recover complete messages from the textual steganography. However, the prototype, while successful, generates text that in 40-50% of encrypted messages might appear odd to a human, as opposed to a computer, reader. But, the system is manually configurable, so the person encrypting the information can tweak the substitutions and errors until an entirely plausible text results, within which the hidden message is invisibly embedded.

Weir, G.R.S., & Morran, M. (2010). Hiding the hidden message: approaches to textual steganography. Int. J. Electronic Security and Digital Forensics, 3(3), 223-233.