    *  As far as I understand, he's trying to train an MMSkipGram model to learn unknown words and associate them with images, inspired by the way babies learn language... looks interesting, but...
    *  <cite>Models of language acquisition</cite>
==== SemEval-2015 Task 1: Paraphrase and Semantic Similarity in Twitter (PIT) ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval001.pdf]]
    *  A task worth participating in... two subtasks: paraphrase identification (binary) and semantic similarity between tweets. In contrast to STS, there is much more spelling variation and much more //street// language. Very interesting indeed, especially if we are thinking about processing @menosdias tweets.
==== MITRE: Seven Systems for Semantic Similarity in Tweets ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval002.pdf]]
  -  352 features combined with logistic regression
    *  a lot of metrics: machine translation, biology-inspired metrics, etc.!
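The general recipe described above — many heterogeneous similarity scores fed to a logistic regression — can be sketched as follows. The three features and the hand-set weights here are illustrative stand-ins, not MITRE's actual 352 features or trained coefficients.

```python
import math

# Illustrative similarity features for a tweet pair (stand-ins for the
# MT metrics, alignment scores, etc. that a real system would compute).
def features(s1: str, s2: str) -> list:
    t1, t2 = set(s1.lower().split()), set(s2.lower().split())
    jaccard = len(t1 & t2) / len(t1 | t2) if t1 | t2 else 0.0
    len_ratio = min(len(s1), len(s2)) / max(len(s1), len(s2))
    c1, c2 = set(s1.lower()), set(s2.lower())
    char_overlap = len(c1 & c2) / len(c1 | c2)
    return [jaccard, len_ratio, char_overlap]

# Hand-set weights for illustration; a trained model would learn these
# from the PIT training data.
WEIGHTS = [4.0, 1.0, 1.0]
BIAS = -3.0

def paraphrase_probability(s1: str, s2: str) -> float:
    """Logistic regression: sigmoid of a weighted feature sum."""
    z = BIAS + sum(w * f for w, f in zip(WEIGHTS, features(s1, s2)))
    return 1.0 / (1.0 + math.exp(-z))
```

The point of the design is that each metric only needs to produce a number; the regression learns how much to trust each one.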
  *  Word embeddings are the new buzzword
  
==== ExB Themis: Extensive Feature Extraction from Word Alignments for Semantic Textual Similarity ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval046.pdf]]
  *  They were the best in Spanish and one of the best in English
  *  Cherry-picking the best approaches
    *  Eneko asks if there were ablation tests to estimate which features were best: there weren't.
    *  "If two sentences have similar NEs they should be aligned first"
==== SemEval-2015 Task 3: Answer Selection in Community Question Answering ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval047.pdf]]
  *  To guess the right answer to a question from several options. Two corpora: Qatar expats (English) and religious questions (Arabic)
  
==== VectorSLU: A Continuous Word Vector Approach to Answer Selection in Community Question Answering Systems ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval048.pdf]]
  *  UIMA pipeline!
==== SemEval-2015 Task 13: Multilingual All-Words Sense Disambiguation and Entity Linking ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval049.pdf]]
  -  BabelNet; tokenized, POS-tagged documents in four languages
  -  Example: the concept of //medicine// as a drug (with varying specificity according to the source: Wikipedia, WordNet, etc.)
  -  The winning approach: content words tagged by exploiting their translations in other languages
    *  The winning approach comes from the French lab LIMSI, in particular from the charming Marianita Apidianaki
==== LIMSI: Translations as Source of Indirect Supervision for Multilingual All-Words Sense Disambiguation and Entity Linking ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval050.pdf]]
  -  The LIMSI system exploits the parallelism of the multilingual test data
  -  assumption of sense correspondence between a word and its translation in context (Diab and Resnik, 2002)
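The sense-correspondence assumption amounts to a lookup: the translation a word received in context picks out its sense. A minimal sketch, assuming a toy (word, translation) → sense table — the real system derives such correspondences from word-aligned parallel corpora, and these sense labels are invented for illustration:

```python
# Toy sense inventory keyed by (English word, aligned French translation).
# Hand-written here purely for illustration; a real system would induce
# this table from word-aligned parallel text.
SENSE_BY_TRANSLATION = {
    ("bank", "banque"): "bank%financial_institution",
    ("bank", "rive"): "bank%river_side",
    ("medicine", "médicament"): "medicine%drug",
    ("medicine", "médecine"): "medicine%science",
}

def sense_from_alignment(word, aligned_translation):
    """Assign a sense to `word` from its aligned translation in context;
    returns None when the translation is uninformative/unknown."""
    return SENSE_BY_TRANSLATION.get((word, aligned_translation))
```

Seeing "bank" aligned to "rive" rather than "banque" is what disambiguates it, without any monolingual sense-annotated data.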
  -  Perspectives: experiment with alignments provided by MT systems, train a WSD system on data annotated by the alignment-based method
    *  check out the METEOR-WSD and RATATOUILLE metrics from the WMT shared task
==== SemEval-2015 Task 14: Analysis of Clinical Text ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval051.pdf]]
  *  Corpus: 100 annotated notes (109K words)
  *  400K unannotated notes
  *  CRF-based span recognition: bag of words, bigrams, POS, chunks, specialized lexicons, trigger terms, distance to disorder spans, dependency parse information
  *  [[http://share.healthnlp.org|Link to the task]]
==== UTH-CCB: The Participation of the SemEval 2015 Challenge – Task 14 ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval052.pdf]]
  -  Disorder entity recognition
    *  Vector space model based, word embeddings (MIMIC II corpus), CRF, SSVM and MetaMap
  -  Disorder slot filling
    *  SVM, n-gram features, lexicon features, dependency relation features
==== SemEval-2015 Task 15: A CPA dictionary-entry-building task ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval053.pdf]]
  -  CPA: Corpus Pattern Analysis
    *  a corpus-driven technique for mapping meaning onto words in text
  -  ACL-2015 tutorial: Patterns for semantic processing
  
==== BLCUNLP: Corpus Pattern Analysis for Verbs Based on Dependency Chain ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval054.pdf]]
  *  Nothing to report
==== SemEval-2015 Task 9: CLIPEval Implicit Polarity of Events ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval077.pdf]]
  *  Nothing to report
==== SemEval-2015 Task 10: Sentiment Analysis in Twitter ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval078.pdf]]
  -  Subtasks:
    *  Phrase-level sentiment
    *  Message-level sentiment
    *  Topic-level sentiment
%%[%%...%%]%%
  *  I had to go to the toilet, so I missed most of the presentation, which looked very good: Trento was the winner in subtask A (phrases) using a deep convolutional NN with additional input for phrases; the second used message-level sentiment + character n-grams, the third used model iteration (tetrai)
  *  For subtask B (message polarity, the most popular task of SemEval) the winner put together four top-performing classifiers from previous editions of the task; the second used a deep convolutional NN and the third used logistic regression with special weighting for positives and negatives
  -  <cite>rotten is less positive than; #happiness is more positive than</cite>
  
==== UNITN: Training Deep Convolutional Neural Network for Twitter Sentiment Classification ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval079.pdf]]
  -  A very interesting and didactic paper on deep learning for NLP
  -  The key to success is the initialization of the NN
    *  number of feature maps: 300
  -  Importance of pre-training: three different experiments: random, unsupervised, distant
  -  careful weights
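The core machinery — convolving filters over a sentence's word vectors and max-pooling over time to get a fixed-size feature vector — can be sketched in a toy form. Everything here is scaled down for illustration (random stand-in embeddings, 4 dimensions, 3 feature maps vs. the paper's 300); the paper's point is precisely that these weights should come from careful pre-training, not from random draws as below.

```python
import random

random.seed(0)
DIM = 4          # toy embedding size
WINDOW = 2       # convolution filter width, in words
N_FILTERS = 3    # number of feature maps (the paper reports 300)

def embed(word):
    """Deterministic random stand-in for a pretrained word vector."""
    rng = random.Random(word)
    return [rng.uniform(-1, 1) for _ in range(DIM)]

# Randomly initialized filters: the "random" condition of the paper's
# pre-training experiments.
FILTERS = [[random.uniform(-1, 1) for _ in range(WINDOW * DIM)]
           for _ in range(N_FILTERS)]

def sentence_features(sentence):
    """Convolve each filter over word windows, then max-pool over time."""
    vecs = [embed(w) for w in sentence.split()]
    feats = []
    for f in FILTERS:
        activations = []
        for i in range(len(vecs) - WINDOW + 1):
            window = [x for v in vecs[i:i + WINDOW] for x in v]
            activations.append(sum(w * x for w, x in zip(f, window)))
        feats.append(max(activations))  # max-over-time pooling
    return feats  # fixed-size vector fed to the classifier layer
```

Max pooling is what lets sentences of any length map to the same fixed-size vector.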
==== SemEval-2015 Task 11: Sentiment Analysis of Figurative Language in Twitter ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval080.pdf]]
  *  Used CrowdFlower for training/annotation
  *  8000 training tweets
  *  4000 testing tweets
  *  output categories: sarcasm, irony, metaphor and others
==== CLaC-SentiPipe: SemEval2015 Subtasks 10 B,E, and Task 11 ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval081.pdf]]
  *  Negation and modality
  *  They created a resource for irony called Gezi (and they used a resource called NRC)
  *  secondary features: emoticons, highest and lowest sentiment scores, POS counts, named entities
  *  no bag of words, but context-aware polarity classes
==== SemEval-2015 Task 12: Aspect Based Sentiment Analysis ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval082.pdf]]
  -  restaurants and laptops
  -  intended to capture sentences with contradictory polarities
==== NLANGP: Supervised Machine Learning System for Aspect Category Classification and Opinion Target Extraction ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval083.pdf]]
  *  Nothing to report
  
  *  Nice talk with Georgeta Bordea: she was the one who built the Saffron Expert Finding System, and she would be glad to collaborate on things related to our highly qualified immigration project
  *  Had lunch with Greg Grefenstette: invited him to work on the Quijote project; he's already counting Don Quixote's words!
==== SemEval-2015 Task 4: TimeLine: Cross-Document Event Ordering (or NewsReader project) ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval132.pdf]]
  *  TempEval-3 corpus and evaluation methodology proposed by UzZaman (2011)
  *  Relations represented as a timegraph
  *  First task focusing on cross-document ordering of events
  *  If we're thinking about AGESS, we should participate in this task
==== SPINOZA_VU: An NLP Pipeline for Cross Document TimeLines ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval133.pdf]]
  *  based on the NewsReader pipeline system (Eneko Agirre)
  *  each subtask first addressed at document level and then aggregated at corpus level
  *  TimeLine aggregation module
  *  Very low F values... the most difficult part was time ordering (low recall for the available temporal relations)... "we were missing temporal relations for anchoring"
==== SemEval-2015 Task 5: QA TempEval - Evaluating Temporal Information Understanding with Question Answering ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval134.pdf]]
  *  QA TempEval: TimeML was originally developed to support research in complex temporal QA. TempEval mainly focused on more straightforward temporal information extraction. QA TempEval focuses on an end-user QA task, as opposed to earlier corpus-based evaluations
  *  This evaluation is about the accuracy of answering targeted questions
  *  main finding: using event co-reference may help
  *  //systems are still far from deeply understanding the temporal aspects of NL (recall: 30%)//
==== HLT-FBK: a Complete Temporal Processing System for QA TempEval ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval135.pdf]]
  *  ML-based (SVM in Yamcha)
  *  Training: TimeBank and AQUAINT data from the TempEval-3 task
  *  All predicates identified by SRL (semantic role labelling)
  *  System described in Paramita Mirza and Sara Tonelli. 2014. Classifying Temporal Relations with Simple Features.
==== SemEval-2015 Task 6: Clinical TempEval ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval136.pdf]]
  *  Clinical event identification (//April 23: the patient did not have any postoperative bleeding//)
  *  Detection of events in relation to the time when the document was written (//narrative container relation//)
  *  narrative container relations: begin1, end1, begin2, end2
  *  ML systems had better recall, rule-based systems had better precision (accuracy)
==== BluLab: Temporal Information Extraction for the 2015 Clinical TempEval Challenge ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval137.pdf]]
  *  Tools: PyConText and Moonstone
  *  initiate work on end-to-end temporal reasoning
  *  Features: lexical, section, HeidelTime lexicon
  *  CRF++, cTAKES, lexical, semantic type, context window
==== SemEval-2015 Task 7: Diachronic Text Evaluation ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval147.pdf]]
  *  interesting task: to temporally date text snippets according to their style
  *  linear models to extend pairwise decisions
  *  a crawler to collect text snippets
  *  interesting for AGESS
==== UCD: Diachronic Text Classification with Character, Word, and Syntactic N-grams ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval148.pdf]]
  *  stylometric text classification
  *  word epoch disambiguation (Mihalcea and Nastase, 2012)
  *  character n-grams are highly effective features for diachronic classification (but not very satisfying)
  *  the prior distribution over date labels has a significant domain-specific effect
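Character n-gram features of the kind used above are simple to extract; they pick up spelling conventions that drift over time (e.g. "connexion"/"shewn" in older English vs. "connection"/"shown"). A minimal sketch with made-up example sentences:

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-gram counts over a lowercased, space-padded string."""
    padded = f" {text.lower()} "
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

# Older vs. modern spellings of the same sentence (illustrative examples):
# n-grams like "nex" only appear in the archaic variant.
old = char_ngrams("the connexion was shewn")
new = char_ngrams("the connection was shown")
```

A classifier over such count vectors needs no parser or lexicon, which is why these features are so effective for dating text by style.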
==== SemEval-2015 Task 8: SpaceEval ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval149.pdf]]
  *  Question answering about the location of objects and events
  *  Text-to-scene conversion/visualization
  *  Standard machine learning models
  *  Lexical, syntactic, open-source features
==== SpRL-CWW: Spatial Relation Classification with Independent Multi-class Models ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval150.pdf]]
  *  Spatial role labeling
  *  I don't understand anything
  *  dependency path to the spatial signal, lemma, POS, direction from the spatial signal
  *  best features: raw strings in a 5-word window, 300-dimension GloVe word vectors; POS bigrams in a 5-word window (the best feature)
==== SemEval-2015 Task 17: Taxonomy Extraction Evaluation (TExEval) ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval151.pdf]]
  *  Taxonomy extraction: given a list of domain-specific terms, structure them into a taxonomy
  *  Subtasks: term extraction, relation discovery, taxonomy construction
  *  taxonomy construction: approaches are less well known or difficult to reimplement
  *  no corpus was provided and participants had no gold standard
==== INRIASAC: Simple Hypernym Extraction Methods ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval152.pdf]]
  *  terms: one to nine words
  *  substring inclusion: bicycle helmet < helmet (suffix)
  *  counts of term co-occurrence in the same sentence (document frequency of terms)
  *  method: consider all domain terms B co-occurring in the same Wikipedia sentences as A, eliminate any candidate B that appears in fewer documents than A, retain N=3
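The method in the last bullet can be sketched directly. The toy "corpus" below is a hand-written stand-in for sentences drawn from Wikipedia, and sentence counts stand in for document frequency; the intuition is that a hypernym ("helmet") occurs at least as often as its hyponym ("bicycle helmet").

```python
from collections import defaultdict

# Toy corpus: each inner list is the set of domain terms found in one
# sentence (illustrative; the real system uses Wikipedia sentences).
SENTENCES = [
    ["helmet", "bicycle helmet"],
    ["helmet", "bicycle helmet", "visor"],
    ["helmet"],
    ["helmet", "visor"],
    ["bicycle helmet"],
]

def frequency():
    """How many sentences each term appears in (frequency proxy)."""
    freq = defaultdict(int)
    for sent in SENTENCES:
        for term in set(sent):
            freq[term] += 1
    return freq

def hypernym_candidates(term_a, n=3):
    """Terms B co-occurring with A, no rarer than A, top N by co-occurrence."""
    freq = frequency()
    cooc = defaultdict(int)
    for sent in SENTENCES:
        if term_a in sent:
            for b in sent:
                if b != term_a:
                    cooc[b] += 1
    # eliminate any candidate B rarer than A, then retain the N strongest
    cands = [b for b in cooc if freq[b] >= freq[term_a]]
    return sorted(cands, key=lambda b: -cooc[b])[:n]
```

Combined with the substring heuristic above ("bicycle helmet" ends with "helmet"), these simple signals go a long way for hypernym extraction.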
==== SemEval 2015 Task 18: Broad-Coverage Semantic Dependency Parsing ====
[[http://alt.qcri.org/semeval2015/cdrom/pdf/SemEval153.pdf]]
  *  Nothing to report
==== SemEval-2016 Task Announcements and closing session ====