17/2/25

ON THE FWCI INDEX

As if a zillion citation indices were not enough, I recently became aware of the so-called FWCI (short for Field-Weighted Citation Impact) index, devised and maintained by Elsevier. See HERE for some information on this index.
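In a nutshell, and as I understand it from Elsevier's documentation, FWCI is defined for a single publication as the ratio of the citations it has actually received to the citations expected of similar publications (same subject field, publication type, and publication year), so that a value of 1 means "cited exactly as expected for the field". The little sketch below, with made-up numbers and my own function name, is only meant to illustrate this ratio; it is not Elsevier's actual implementation.

```python
# Minimal sketch of a paper-level FWCI, as I understand Elsevier's definition.
# All numbers are made up for illustration; this is not Elsevier's code.

def paper_fwci(actual_citations: int, expected_citations: float) -> float:
    """Citations the paper received, divided by the average (expected)
    citations of similar papers (same field, type, and publication year)."""
    return actual_citations / expected_citations

# A hypothetical paper with 30 citations, in a field where comparable
# papers of the same year and type average 12 citations:
print(paper_fwci(30, 12.0))   # 2.5 -> cited at 2.5 times the field average
print(paper_fwci(12, 12.0))   # 1.0 -> cited exactly at the field average
```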

How did I learn about FWCI? (I admit I did not know anything about it.) A colleague posted the following table on Facebook, reproduced from another colleague's post on LinkedIn (note that I hibernated my LinkedIn account about a year ago):

(big apologies for the inconsistency in formatting)

The table lists the top 50 academics, ranked by FWCI, in the field of "shipping and port management" (SPM).

First of all, big congrats are due to all these people for making that list. I note that I am also included in it, in the No. 9 position, which is probably a surprise, as I have published only a few port management papers. I have also published some papers that could conceivably fall under the general rubric of shipping management, if one finds a way to explicitly define the field (more on this later). I also note that most of my own citations are in the vehicle routing area, with little or no connection to shipping and port management.

Needless to say, there has been a big publicity splash on social media about this table, especially by people near its very top, all of whom are understandably happy about their position.

As for me, I have some questions and comments about the table:

1. How was this table produced? I visited the FWCI site and found no way to enter "shipping and port management" as a filter, criterion, or discipline, so as to see who is included in the area. A colleague who tried doing this by hand produced a different table. His question on how the table was produced went unanswered.

2. A more basic question is, how is the SPM field defined? The Stanford/Elsevier global scientists database lists more than 160 scientific subfields (as they are known), and SPM is not one of them. Is a paper on ship weather routing an SPM paper? Probably. How about a paper on reducing maritime emissions, or a paper on finding the best risk control options to mitigate oil pollution? Or a paper on port regulatory policy? Or a paper on intermodal logistics? Here it is not so clear. So there is an identity problem to start with, and two different people may have different opinions on this subject.

3. There is no direct mapping between SPM and any of the 160+ Stanford/Elsevier designated subfields. We cannot say, for instance, that SPM is a subset of the "Logistics and Transportation" (LT) subfield, even though many researchers in the above SPM table have LT as their primary Stanford/Elsevier subfield. There may be other researchers whose primary subfield in the Stanford/Elsevier database is something else (for instance Operations Research, Economics, Business and Management, or other) and who may also claim to work in SPM.

4. But even if we find a way to explicitly define SPM, the next question is, who qualifies as an SPM researcher? One who has all of his/her papers in SPM? Some of his/her papers? Or at least one of his/her papers? And who will make that determination? The researcher himself/herself, or someone else (and who)? The Stanford/Elsevier global scientists database makes this determination automatically, using an algorithm that looks at the overall publication picture and assigns to each scientist in the database exactly one primary field and exactly one primary subfield. But there is nothing equivalent for SPM.

5. How would these results differ if instead of SPM we had "shipping and port economics", "shipping and port logistics", "maritime and port economics and logistics", or something related?

6. There is more. I can identify several other people who have published some SPM papers, and thus could be in the above table, but were, for some reason, omitted. These include (in random order) Shuaian Wang (FWCI = 2.1), Qiang Meng (FWCI = 1.94), Michael Bell (FWCI = 1.58), Jan Hoffmann (FWCI = 4.40), Christos Kontovas (FWCI = 3.63), and Thalis Zis (FWCI = 2.26). And this is a non-exhaustive list. If these people were included in the table, some other people would have to be removed. The omission of these people brings back question No. 1.

7. I see that my own FWCI (and I suspect everybody else's) refers to the time interval 2013-2024. Is that reasonable? For me, that interval covers 84 out of a total of 180 Scopus-listed publications. Why are the rest of these publications not relevant?

8. What does Elsevier itself say about FWCI? It clearly states that this is a paper-level index and NOT a researcher-level index. For more on this see HERE. It also carries this caveat: "Highly cited publications for entities with a small scholarly output may skew the FWCI. This metric should be used with care when assessing performance." This means that it should not be used to compare individual researchers with one another (see the small numerical illustration after this list).

9. See also HERE for an article by the University of Liverpool on the pitfalls of FWCI. According to it, among other things, a list of recent Nobel prize laureates has a subpar FWCI ("We ran all Nobel Prize Winners between 2018 and 2020 (a total of 25). We found that 10 (or 40%) had 2015-2020 SciVal scores < 2").
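To illustrate the caveat in point 8 above: if an author-level figure is obtained by averaging paper-level FWCIs (one plausible way such tables might be compiled; I do not know how this particular one was), then a single highly cited paper in a small output dominates the result. A sketch with made-up numbers:

```python
# Made-up illustration of the "small scholarly output" caveat: averaging
# paper-level FWCIs over few papers lets one outlier dominate the figure.
# The aggregation below is one naive possibility, not Elsevier's method.

def mean_fwci(paper_fwcis: list[float]) -> float:
    """Simple average of paper-level FWCIs."""
    return sum(paper_fwcis) / len(paper_fwcis)

small_output = [12.0, 0.5, 0.7]                # 3 papers, one of them a blockbuster
large_output = [12.0, 0.5, 0.7] + [1.0] * 27   # same 3 papers plus 27 field-average ones

print(round(mean_fwci(small_output), 2))   # 4.4  -> looks spectacular
print(round(mean_fwci(large_output), 2))   # 1.34 -> much closer to the field norm
```

The same three papers yield a very different picture depending on how many other publications they are averaged with, which is exactly why Elsevier warns against using this metric to rank individuals.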

Based on the above, and even though I am happy to be included in this table, I would be a bit cautious about using FWCI, whether for SPM or for other scientific disciplines, as a criterion for individual researcher performance.

See HERE for several of my prior blog posts on citations.

PS (ADDED ONE DAY LATER, 18.02.2025)

My blog above found its way onto LinkedIn (via a colleague). I could not see the discussion since I am not on LinkedIn, but I received an email from another colleague, who is at the very top of the table, in which he partially answered my question No. 1. He clarified that, among other things, those who compiled the table (whom he did not name) intended to list academics who are members of IAME (the International Association of Maritime Economists) and have FWCI > 1. He also mentioned that the compilers of the table did not intend it to be used for comparisons among researchers.

I am not sure if this clarification and caveat were also shared publicly or were meant only for me.

I note, however, that some of the people listed under No. 6 above are IAME members but were omitted from the table nonetheless (reason unknown). I do not think this is fair to them, dubious as FWCI may be. Conversely, it is not clear whether all the people in the table are IAME members. But surely some are, including the President of IAME and some IAME Council members (I am one of them). Given all the deficiencies of FWCI, I am not sure their being listed in this table is fair to them either.

Has IAME been consulted in compiling this table? I doubt it.

All in all, and in addition to all the deficiencies of FWCI noted above, I think it is fair to say that the information in the above table is not accurate and can be misleading, particularly so long as it continues to be circulated on social media for the purpose of highlighting, comparing, and benchmarking individual academic performance and impact.