Gephi forums: Please post new questions on the Facebook group too (https://www.facebook.com/groups/gephi)
Topic feed: https://forum-gephi.org/app.php/feed/topic/3549

Re: Trusting Gephi results (https://forum-gephi.org/viewtopic.php?t=3549&p=10643#p10643)
Thanks for the explanation. Nevertheless, I still think it is better to get higher modularity.

Just one more short question:

... haha, UCINET uses the source code of Pajek, so it's a clear tautology

What does that mean? As far as I know, the source code of Pajek is not available for free,
so I doubt that UCINET was able to get the code.
Do you have other information?

Thanks.

Posted by xptrxptr — 20 Aug 2014 09:22


Re: Trusting Gephi results (https://forum-gephi.org/viewtopic.php?t=3549&p=10639#p10639)
This is a hard question, and I'm only aware of a few work-in-progress research papers that truly compare SNA tools.

1. Closeness centrality results vary among software packages (not only Gephi, but also UCINET, Pajek, ORA, igraph...) depending on whether the graph has disconnected components and self-loops; a small sketch after this list illustrates the effect of the convention chosen for disconnected graphs.

2. A lower modularity doesn't mean worse results. Please read the original paper carefully to understand how to interpret the modularity value: http://arxiv.org/abs/0803.0476

3. Haha, UCINET uses the source code of Pajek, so it's a clear tautology :)
The reviewer should also check with Cytoscape, the sna package for R, etc.; he will get different results. Even when metrics are well defined, there is still room for interpretation when they are implemented. And all software contains bugs. So it's a bias that should be taken into consideration when drawing conclusions, like any other bias in methodology.
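
To make point 1 concrete, here is a minimal sketch, assuming Python with the networkx library (the graph and node names are invented for illustration); it shows how two common conventions for disconnected graphs give different closeness values:

    import networkx as nx

    # Two components: a path a-b-c and a separate edge d-e.
    G = nx.Graph([("a", "b"), ("b", "c"), ("d", "e")])

    # networkx restricts each node's computation to its reachable component and
    # rescales by the component size (Wasserman-Faust correction); other tools
    # may return 0, skip unreachable nodes, or normalize by the full node count.
    print(nx.closeness_centrality(G))

    # Harmonic centrality sums 1/distance and lets unreachable pairs contribute 0:
    # yet another convention, so node values and rankings can differ.
    print(nx.harmonic_centrality(G))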

Anna Kharitonova has studied the results of our implementations on a set of small graphs against the statistical formulas and the original implementations of the algorithms, see:
https://docs.google.com/file/d/0BznZHkr ... FReU0/edit
source code: https://github.com/annaalkh/gephi/tree/test-feature
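
As a toy illustration of that kind of check (not Anna's actual test suite), here is a sketch assuming Python with networkx and an invented six-node graph; it computes modularity directly from the formula and compares it with a library implementation:

    import networkx as nx
    from networkx.algorithms.community import modularity

    # Tiny test graph: two triangles joined by a single bridge edge.
    G = nx.Graph([(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)])
    communities = [{0, 1, 2}, {3, 4, 5}]

    # Modularity straight from the formula Q = sum_c [ L_c/m - (d_c/(2m))^2 ],
    # where m is the total number of edges, L_c the number of edges inside
    # community c, and d_c the sum of degrees of the nodes in c.
    m = G.number_of_edges()
    Q = 0.0
    for c in communities:
        L_c = G.subgraph(c).number_of_edges()
        d_c = sum(G.degree(n) for n in c)
        Q += L_c / m - (d_c / (2 * m)) ** 2

    # Both values should agree (about 0.357 for this graph).
    print(Q)
    print(modularity(G, communities))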

To sum up, only the HITS algorithm is not reliable.

Posted by admin — 20 Aug 2014 08:19


Trusting Gephi results (https://forum-gephi.org/viewtopic.php?t=3549&p=10635#p10635)
Gephi is a wonderful tool for visualizing networks,
probably better than any other software.
Thank you for making it.
The only problem is speed with larger networks,
but that is another story.

But when doing analysis there are problems:

1. Recently there was a discussion on the SOCNET mailing list
suggesting that the closeness centrality results computed by Gephi are wrong.

2. For almost all networks, when computing modularity I get
worse results (lower modularity) with Gephi than with Pajek
(e.g. https://forum.gephi.org/viewtopic.php?f=29&t=3543).

3. Recently I submitted a paper to a journal and the reviewer was suspicious
that my results were wrong. So I sent him my network and he checked the results using UCINET.
The results were different. Then we also checked with Pajek and got the same results as with UCINET.

I see a major problem with Gephi - can we trust the results?
My suggestion: it would be good if somebody would check the procedures,
make the necessary bug fixes and give a kind of 'certificate' that the
algorithms used in Gephi are correct.
In my opinion the Gephi community would benefit from such a check.

At the moment I solve the problem by always using alternative software
(UCINET and/or Pajek) to check the results ;(
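
A rough sketch of that kind of cross-check, assuming Python with networkx and a graph exported from Gephi as GraphML (the filename is made up for illustration):

    import networkx as nx
    from networkx.algorithms.community import louvain_communities, modularity

    # Hypothetical file: the network exported from Gephi in GraphML format.
    G = nx.read_graphml("my_network.graphml")

    # Recompute the disputed metrics independently of Gephi.
    closeness = nx.closeness_centrality(G)
    communities = louvain_communities(G, seed=42)
    print("modularity:", modularity(G, communities))
    print("top closeness:", sorted(closeness.items(), key=lambda kv: kv[1], reverse=True)[:5])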

Posted by xptrxptr — 18 Aug 2014 07:49

