I hesitated between using the [[http://www.cplusplus.com/reference/thread/this_thread/sleep_for/ | ''sleep_for'' C++ function]] and simply commenting out the line in ''src/lib/extract/_baseimpl.cc''. I did the latter and apparently it worked:
<code>
// Commented by JGF for FRED 12/oct/16
</code>
//GSoap// is already installed in the ''ext'' directory:
<code>
$ ls ext/
</code>
I got an //aclocal-1.10// error:
<code>
cd . && /bin/bash /opt/FRED/BoxerServer/candc/ext/gsoap-2.8/missing --run aclocal-1.10
</code>
===== Boxer and statistical models =====
Following [[http://web.archive.org/web/20160313031620/http://svn.ask.it.usyd.edu.au/trac/candc/wiki/Installation | the documentation (Step 6)]], we just go to the //candc// directory and make...
<code>
$ cd /opt/FRED/BoxerServer/cand
$ make bin/boxer
% Autoloader: iteration 2 resolved 21 predicates and loaded 28 files in 0,074 seconds. Restarting ...
% Autoloader: loaded 33 files in 3 iterations in 0,205 seconds
</code>
Finally, we just check that the statistical models are there:
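One way to script that check (a sketch; the ''models'' subdirectory name is an assumption based on where the C&C tools usually unpack their statistical models):

<code python>
import os

def list_models(candc_dir):
    """Return the sorted entries under candc_dir/models, or None if the directory is missing."""
    models = os.path.join(candc_dir, "models")
    if not os.path.isdir(models):
        return None
    return sorted(os.listdir(models))

# e.g. list_models("/opt/FRED/BoxerServer/candc")
</code>

Returning ''None'' rather than raising makes it easy to tell "no models directory at all" apart from "directory present but empty".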
FRED works only with Core NLP 3.4.1, so we go to the [[https://stanfordnlp.github.io/CoreNLP/history.html | Stanford Core NLP release history page]] in order to download this specific version.
Now we follow the "[[https://stanfordnlp.github.io/CoreNLP/cmdline.html | Using Stanford CoreNLP from the command line]]" documentation page. So we go to the Core NLP root directory and run...
According to the documentation, this command processes a file called ''input.txt'' and produces an ''input.txt.xml'' file with POS, named entities and lemma annotations. There's some configuration to do (classpath, properties file), but we will wait until we know exactly how FRED uses Core NLP before configuring further.
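As a quick sanity check of that output, a small script can pull the annotations back out of the XML. This is a sketch assuming the usual Core NLP 3.x layout (''token'' elements with ''word'', ''lemma'' and ''POS'' children); the sample fragment below is made up for illustration:

<code python>
import xml.etree.ElementTree as ET

# Tiny fragment in the shape Core NLP 3.x writes to input.txt.xml
# (assumed layout: root/document/sentences/sentence/tokens/token).
SAMPLE = """<root><document><sentences><sentence id="1"><tokens>
  <token id="1"><word>Dogs</word><lemma>dog</lemma><POS>NNS</POS><NER>O</NER></token>
  <token id="2"><word>bark</word><lemma>bark</lemma><POS>VBP</POS><NER>O</NER></token>
</tokens></sentence></sentences></document></root>"""

def read_tokens(xml_text):
    """Return (word, lemma, POS) triples from a Core NLP-style XML document."""
    root = ET.fromstring(xml_text)
    return [(t.findtext("word"), t.findtext("lemma"), t.findtext("POS"))
            for t in root.iter("token")]

print(read_tokens(SAMPLE))
</code>

For the real ''input.txt.xml'' we would pass ''ET.parse("input.txt.xml").getroot()'' instead of the inline sample.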
==== Python interface to Stanford Core NLP tools v3.4.1 ====
So we go back to the /opt/FRED/externals directory and clone the [[https://github.com/dasmith/stanford-corenlp-python.git | Stanford Core NLP Python wrapper]].
We check the Python version and install pip and the wrapper dependencies:
<code>
$ python --version
Python 2.7.6
$ sudo apt-get install python-pip
$ sudo pip install pexpect unidecode
</code>
Then we follow [[https://github.com/dasmith/stanford-corenlp-python/blob/master/README.md | the python wrapper documentation]], which specifies that Stanford Core NLP must be a child directory of the python wrapper, so we move our Core NLP directory inside the wrapper's directory:
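A quick way to verify that layout from a script (a sketch; matching on the ''stanford-corenlp'' prefix is an assumption based on the default name of the unpacked Core NLP distribution):

<code python>
import os

def corenlp_is_child(wrapper_dir):
    """Return True if a stanford-corenlp-* directory sits directly under wrapper_dir."""
    return any(
        name.startswith("stanford-corenlp") and os.path.isdir(os.path.join(wrapper_dir, name))
        for name in os.listdir(wrapper_dir)
    )
</code>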
So we must install [[http://www.nltk.org/install.html | NLTK]] because it looks like a dependency for the wrapper:
<code>
$ sudo pip install -U nltk
$ python
>>> import nltk
$ sudo python -m nltk.downloader -d /usr/local/share/nltk_data all
</code>
We test again:
<code>
$ python client.py
Traceback (most recent call last):
  File "client.py", line 18, in <module>
    tree = Tree.parse(result['sentences'][0]['parsetree'])
</code>
We still have an error, but it doesn't look bad: most likely it comes from NLTK 3 renaming ''Tree.parse'' to ''Tree.fromstring'' while the wrapper was written against NLTK 2. So we're going to ignore it and move on.
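If the error ever needs fixing and it is indeed the NLTK 3 rename of ''Tree.parse'' to ''Tree.fromstring'', a small compatibility shim should do. This is a sketch; ''parse_tree'' is a hypothetical helper, not part of the wrapper:

<code python>
def parse_tree(tree_class, s):
    """Parse s with whichever entry point this NLTK version exposes:
    Tree.fromstring (NLTK 3) or Tree.parse (NLTK 2)."""
    fromstring = getattr(tree_class, "fromstring", None)
    if fromstring is not None:
        return fromstring(s)
    return tree_class.parse(s)

# Line 18 of client.py would then become something like:
# tree = parse_tree(Tree, result['sentences'][0]['parsetree'])
</code>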
==== Babelfy ====
I can't find any reference to entity disambiguation with Babelfy in FRED's code, so I won't proceed with the installation from the [[http://babelfy.org/download | Babelfy download page]]. Maybe it's a TODO to replace the Tagme calls (which are still inside FRED's code) with Babelfy calls.