Now we follow the "[[https://stanfordnlp.github.io/CoreNLP/cmdline.html | Using Stanford CoreNLP from the command line]]" documentation page. So we go to the CoreNLP root directory and run...
==== Python interface to Stanford Core NLP tools v3.4.1 ====
So we go back to the /opt/FRED/externals directory and clone [[https://github.com/dasmith/stanford-corenlp-python.git | Stanford Core NLP Python wrapper]]
We check python version and install pip and the wrapper dependencies:
<code>
$ python --version
Python 2.7.6
$ sudo apt-get install python-pip
$ sudo pip install pexpect unidecode
</code>
Then we follow [[https://github.com/dasmith/stanford-corenlp-python/blob/master/README.md | the Python wrapper documentation]], which specifies that Stanford CoreNLP must be a child directory of the Python wrapper, so we move our CoreNLP directory inside the wrapper's directory:
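The move can be sketched as follows. Note that the release directory name <code>stanford-corenlp-full-2014-08-27</code> is an assumption (it is how the v3.4.1 distribution usually unpacks); substitute the actual names under /opt/FRED/externals:

```shell
# Sketch only: directory names below are assumptions; adjust to your paths.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/stanford-corenlp-python"          # the cloned wrapper
mkdir -p "$DEMO/stanford-corenlp-full-2014-08-27" # the unpacked CoreNLP release

# Move the CoreNLP directory inside the wrapper's directory,
# as the wrapper's README requires:
mv "$DEMO/stanford-corenlp-full-2014-08-27" "$DEMO/stanford-corenlp-python/"
ls "$DEMO/stanford-corenlp-python"
```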
So we must install [[http://www.nltk.org/install.html | NLTK]], because it looks like a dependency of the wrapper:
<code>
$ sudo pip install -U nltk
$ python
>>> import nltk
$ sudo python -m nltk.downloader -d /usr/local/share/nltk_data all
</code>
We test again:
<code>
$ python client.py
Traceback (most recent call last):
  File "client.py", line 18, in <module>
    tree = Tree.parse(result['sentences'][0]['parsetree'])
</code>
We still have an error, but it doesn't look bad, so we're going to ignore it and move on.
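The traceback most likely comes from an NLTK API change rather than from the wrapper setup: NLTK 3 removed <code>Tree.parse()</code> in favour of <code>Tree.fromstring()</code>. A minimal sketch of the newer call, using a hand-written bracketed tree of the kind the wrapper returns in the <code>parsetree</code> field:

```python
from nltk import Tree

# NLTK 3 replacement for the removed Tree.parse():
# parse a bracketed (Penn Treebank style) string such as the
# wrapper's 'parsetree' value.
tree = Tree.fromstring("(ROOT (S (NP (PRP I)) (VP (VBP like) (NP (NN coffee)))))")
print(tree.label())   # top-level node label
print(tree.leaves())  # the sentence tokens
```

If client.py is patched to call <code>Tree.fromstring</code> instead of <code>Tree.parse</code>, this traceback should go away under NLTK 3 (not verified here).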
==== Babelfy ====
I can't find any reference to entity disambiguation with Babelfy in the FRED code, so I won't proceed with the installation from the [[http://babelfy.org/download | Babelfy download page]]. Maybe it's a TODO to replace the Tagme calls (which are still inside the FRED code) with Babelfy calls.